[Binary artifact: tar archive of Zuul CI job output. Members: var/home/core/zuul-output/ (directory), var/home/core/zuul-output/logs/ (directory), var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log). The compressed log contents are binary and not recoverable as text.]
)Oc.*5X@UnӰz`uvsltHow}_(t0)|=܆A6v>˻̲էa =>6uS_σOàfs00#9$,m``,cպ|7f-F# l.PK(ԴƌЩ) ձ b!ݴ/"Q 6oAFA; ,3 I&%z<:LQ̵1g(\#s @ XX߉<҆2G[#G"M@">?0ǃj5 h>媡;|COvP)3n !˝Ol;{$wV8X|!"$O E nXaBϷۋxa(R0?~ +zpaaka?v5{۽U;qIb"La] Dm2(bx_\>ś ÅÅ-Ӥ޺Z5]unۖV:po & [:S!+_"Ww7Fmxfz7LJ{ /Ⱦy.+ؤe^mh[Tw4f_w9A/ SSO_ڢX_5Y*߿K~ E'@O.'} 49s14}L;MN坦9FDP1 G#0JdE)NHNf8 +bGqQ0-yR\}uVC .;3i⥅#{|gŠdX1K Kq6,ѹ3c|,qIXg^1S ^]"_rДf˭D9/u͒Qeg('!~5A_O߼ih3< IP\jIBӻ[v td5RWMoǭQ[qɑ:-5PӪSӻOTG*ϣ_0a?Kxv)/e!W2:IǴXUczg^_s|N͛lW dڭ$.}IݽW+tU n5qHuۆi"c`bҹVsb#V,Kʊ,0qD9hkc>aLT3hax4].ţIJrMRΣ =Fh89ug.1 |#+`XR|6*蹨7WWK١R]aTXТ&KUX.~(Ӈ|0Bv($ fRΖcQho1gxDÊ4+j%xS]k>P.xo!oXT|êRi o8PG*JLjq)f1JHcI(UOȞ_{iaZ- @xS iFYšg#3$7 ubo.RӦgWuCiZС@Bp" g arVL—=Xm+ۇr,s2JrompX_ {Fzbɭ`:S`G},D)tHnN9 O)Q"{%꽤is}Jz#"8R7>֘#P-M@浗j7+&T'bE/:me͟:m'ΑZ%oOU9V $Z%^8ׯV`C>0IDo!)c3܏2RTidMX #tO~Gv]}|k__>-xZ77xL>FFՊc-c`lq0RT$:+6MO>HY7yzӀLZ*]Tɔy]tV`a$SG1 wX={O-S)FB%Xb-بkZf6lR8\2JA4Ӣ{b7%O%f=w'?-cUMor1k[K@'\"}$cjS X%@DrV{$&Q#MVLaG,K ")=$v&[٨J7Sg?˳~;ϗ'()V>b,Ƴ4r)޳4o-gR+٩NkP]LFh%O9s.[xWyy%]wUYY f#z =Yd]뽥cܩb?ҲjJvR?9r?eGwT_5is#|r-:~'W]}Ss{ۡdvn7ޏyM=軷}+W6o{aQ ZUיRej{]9XpXefQF5% 8JXI78+ZcM ~m'. d)jm訍QjZ.3"]1B)8HY-J?@y}L''.;pqJ/UWRs!>bJOЪ&DT̨hdיR4Ei) |v)i9(0"Q4>%`p#0, H@D5*): da2Pr1 8]e.ۓ2@@4lJZ%]h¬dbn: S&L#<7~^0W^\/祭GׯQ}9}g6N ^.^A]Be˧F%Ѹ(-d$Iuw63]^=3"Ϗ<ҫl%FYYV@ Fh|ᢪm1be30Vy*]yZMc`xcxVTY'}%B`/g/s6d}./JafVXXS?ZJ)"Rڳ n3+و/jAߊ4ƭ YI|2~ɜ :**Q.@R+is/#? ]cd *ϮC Ql}d?ZEuIUMzalp8ZA]Nk*䖮c٣Ts.Au,+~o/L_~OnT$CrYSO!Z}"_DpJ;1Lm2|:_G9?Oy9Jy3ϳܬRcWiIýĠ7E*;H3ϰO8v^@m0J⣻䖾DDzClpyZ\IuÏis")݉6ńd5yy%+UnnUYD?v PۏW'ۓ;?U//5ű|H%;ݑ`xqXΡ_=_Xs^t_O,vw3~vӼw;D':ׂ|N8k^ |"}!em$%>Ai-CE-)iX*н?Y;ɾGM b2c۸hEڊ7W])c⟐Lo1\on!Y ~ˍxZDGWJy+y+hFSw[tVw^}MTm tس| \Uh3*mvo-ܠAcp -dK[/;mcU1h8~s^[\^^xks"jND(!h}N ]rdr:M PG(hպYXMގ.N, 4xir!sszdEk})'U,PE)ڻRUTGkzur-h;ѯYx#w~f%>n Oq}'Rm9ZUQ~mZb 藓ro7/ 6ۮVTd  y|q~| wX(i4chvXJVT0a @Fk+DF%QeA6M %ʀ}& STcHXT`ۨ F BJcL!VlZ^lڃyxxxO5ev]rWNJ~iIcs߬yKbFF|ckkۤ`u78Zs !BʠD#{ym%k*Z#='N Y8 "+uQ'00EH/.k3heҚgF94ѻ nAΗzXbCFWBVTd(5J d t6WRDTPݧ'fBLr|luM?-9vVO tjUv-F5A#w-vt.ža-'v I1]tNvJʌC֟fO)9Vu)QN:eHRBN90XwR@}PR =& f@)^ym.Y @(HT45#JւW"jҲHM:gkTQRyd-Lxſ!E!Nl$N*k 𹱡6Sgj;jw˴]'9O!#<94#6P]Cїf}WẦ?\jAK1 :f*r,jSO$D&=)2Z bIEEcQ z(%$З#`U0:L؎4f3P,ЍXx]wM*3̮lͻtq:N^?HuvɁdIE)*i }NT:?peP(9ݶȩ0EB$V'ɀo'|(RZ >;|\SAfTPۏ=2]3>U霔qJ09袂HA&Xr& lh؃kv!33^tVkD-)}Ljr16fÕS}SZT8CcD #"3'YjbTG$}t}p mb"S%``}%8 ȶ!,,@ӲTILłSsfOZjez;7Rg?"~"~ԉqqsJf@\h F\qq'K."HClHz_^h\Hl'[͎SC8 a{ ЭȽwXQ $ُoُ[X~xߎkQn"슯I>;oR`?x)>;bJ kK-/٧j}̕gtv]Q WT|kZ?]"s7 ^1-ߏ<#ܙVOZAI`|D*29RI`"٬L}+9T6 u'M\W1_Ҹ{ R T*ubo' Y$i> P_`kLٚwd23݅WM&㥕b6S=3Bk˞Hbċ!.7 zL!iA-)# E}(8DObڜS>my{<6.a,%Q%knjDgW M8ld~'Kq5jUO'{kcɰx\0.S10ڐS,!$+4KޕXao2_gy\^kidHעeC⿞4F6e3* 5'>Z# ikO67ȣ!5C da#TTa 4=BDeZcɜNtOPiW?!ըsuq3648VE. ],lOጭ袞oVP$?&`y9[i2uH Gl\LgG?PB߈KgbAi|d>i6Ƕ?5ϪR1 Aٻ߶$G`M2vdgn`D4)IVXNB(Q _B z5iW=I!8/5*U.:\4!zLj4k(~y.g$L~ɖT_7o&Ճl 3R2c| bwv^-]0e৫$goa:Bu5)\[kVSes][d ևOЃe?㇃ʹ:{ɆIjk%hsFZVNq:5RdpqȆ1KblXP%0N﹙ *R%u3=}*E™,)cd I'YӬ U$ u+64ULlboP\ɟ@H}LJ7?bNo`$8تkX/1w`V{T۪VMkP5&m&6\Sr)3]B(%̳*je4k6Vݲ$Sd;kyaa"LRaom`Ph0@ NIzhkTk~IdAXIOAk FD@$J*;-@@i19pwʡAimNlkռ>3z[ hf'ʵL+NMkpU8KR'eyKM x3\ 9D[i JuMy79Cm9{vVx+XkeG~a  Fj*e;QS"'^qmA_k4ZRW~'G%>PƵ`1 YE9E$vFNAj6V%‰wQTT Ԙb"ňy(l>[PWMU#2<*lRXEd %"\[;Ѵ@kyi&| _O>]уG?f7;ß0߿\nJwq~ىKPa:y(8gXgL{>sT~)xP$i`xnVEUga vB6TnޮW٧QS1+\ŚV0۬ LFq)|_AOrK*g^eẄ́(j'fx5^7VᄿCO0Rbo3gzn<3Ɣ 0ɿ{Mp̢V0#)6gF4ܘ'Lhw1bvm6Ԥ=iD}M:Dj]EJ%wf=@6{.|0їX W^&V}x;> Kr} x m5[I2hS4R1EË()YOs./ O4L^o~Bp2gPJA!H֎HդUog6sɡ*NҜ1LscHH:\x)gq@EKws,7'[21#"҅"TJde$ )u"eְZ);cƌL"^ˈA&Z ihٿ5v:?, ^mwvKQ6gr#k+8IJf҉W5>6~|!9.ڟJe;;ܑg7fAJPrδ9M{+=兒h8@ovqa&s|KcsH_~z xLϤ fSsiѦj|K'._.nZ}"m:ص$ZNAR >+ f0Y}dbVMM,(VV(߅Gٟ~ `qu>ibWxEߥzTv{p78R5HoEfTS/?!G0OQ<+SqOܪwO\.w/Q\*d׃$?VoԥlJo ${Q,F 
)bWKZYM3y+?>hT[/ 0YØ)?W cBFb8FVO%#xc)ZDd!R&R/5eDDL ` Xy$JݐnT5vJq9loM鐎'L9]|z r2-bw,jֵ}m,`>.f4P ҀZn犹` ! b夵!0/(N"E"\MC  /E0x:x IL JS lJ!I~Hqi%@zI9?KCz=$$;gʪ ydOTZOR6: %K18h͕@T1YN#xɹDO$su*RWÚCD&M3ȳo[Rj4~uS,Ȯ:DԳ܊h`Jaxԫ?F[_khc9=ǽ%yRPO?X:m.>Zi\SU3µ&Y P>@Q .LEVE$3ާ~Sw4E?ޮTul9qjb"_ 7#At{r=EyRæU9@,glww߼~TTTѶr$D:kxsSgělYو=EzIFm@E%EQG swa$ے5$QaI:{5g4KGË,]DN'RB~6< `m\s; T<2q<{g՚*8'_*~gRKiXl=E\1տBjЛSj~g*{⽋;no/n9X[:uQ{f 0PA.41#e|PӤ U}s0ÔĽNmb sug㫼vyG|"6DCk#eHQ[Ð4R12 V^.0Qfmmn,Y`Qc [N/.`E..גܩ-h H6YΥ0EM1BC$]b;K̳.k "IJvlz$H% C aTy}8:Yy> :F` S\Q%sGPz1qY3iPJzNpX %ְmPyI&S&H"--@ڍE"B>uWl/[br9$VyW_QN鳇Dc$9ל1S6D,R>ň)#Nje3D+i#(bF1ZleF͝QFG J^T 0DWpkܽgSX?d˶ʳ;)DjzEY>f$^ȬrYw$e4ɽhXbI\K)[O2ʓ;؛۪ռ ˠ})wmI O6،ԏcukc7 NnB?e)R#_ I=LJ 9(2 3_UWW=mN .89MT$');ȼ9iS>SL d)8R|\I^|1 BRޝoQW:&XPg _PBNE|fsܷ-{lWPr`8ԹI MROn|m7R D$l7 ))ZTTTe{?W$U+W:[j=۝#JkK7Tf&434}hwHZoRV4'O&iuixIjiF&)It{WPƸ#GKqШ!>٢K ҀP 6Ú,50-k5ӑ m޷W8Րވr>fR$׿S#%`yWF?/??ΞUC`%_ҴߑM`2kg&[quE_pEו&Q%h‡Лk&amֽR Q.M6=m|̫x>~ǯ钲`+w_ @p;rzw[Q˹_j[Q)tШx(k$31)]%g^~{(<8nݕ|t  ~} r{~϶t_ /yxݻ<7**U;:fYěo?kv"io0o`>Qyshh0n cjzeD*g׳}EÊNG R J6hvSo2v}sj- $P[m eL J-m_I^& m=uϺQKoLT9xOh٪ ʥ ,O {7.R҉ۥsG˹k/}=t,nKeyȂV1טXт$ .i1-1PFi/9.YokxnGn/Voq ^s[o[U uލ̕O]b)qc])GuF'DAq4tM8>e9E`l)#-tu߯}};ӟvlh7VȽgC#_{CZ$a(fo~n7Aj@\}.2e<' l#suPY("`Qg>e'SZJRbdqzeL8Pbőd:g d1tku>f>ɱzMzů;>t~}ٰ]vk{HPOTr>- ;Awdk'1Z`٣un$\)^CxN:!j#cA1*"q)5ļuxˤe t zDb eHч8rYJE,;ީM Y*{/؜}|eb"$$蕴 )FXy@9Xċ:@deI],|s:pM]D/%"HHDJ\ jg-xZRՁ,Q 8}ge;M22M&TrURT[mq'4aޡ7Xٽ;ԔY{ExΧJN1~Ws@SQ2͓ӊgA,8( z|Q[S!֦, Ճ%P]c974sgRR]kȹ[3vU:ӅqƦv ^ޫ.\PT6d"ٸ8*9,Ws}`&5J/X6Q@rN͙FO£ N52C$c'dB66QCАpduYCp:G,۱5vsWvgܱ+Z4Ol1A*#Ybʒ8;ұ@+Ѩْ" S}6>8LȐYQ!]E-Pب 8 >dT]ש錜acϏ\Ÿ+3T#5kĽF5/aVFϭ [oN h%:%kហ#em T#:ӥПFS[d@gB hc$KZp%c9k{4Ձ⸞ .:; "EzqoEքd".pe[6jn9,Xmm- i{xzqgpbWܱ+oA-,b~E xӽ3",x dOˉ9Xʞ*")b<7Bes>Z3m+ik.K `]@ָ Q'ZC2M555|3 QV+, %XJ#?NGW9ݾnzӣr=ܔ+1yJ6 -a5-밿Ζ&F0qD LbT%Ɣx吥sA3uɚ33Q|1顖IXE "$^I2zEhiZCC^ML{B&'3LΔJ*M:cV^\|Έ-(KBjWG'Nmr_DŽV ѯhJ x_eqz~T9~$wXY3H+swQXUIod}K2v8 b4\2qxOxc< !]=7jrq( sX\N.H ^>]DL/TxzF sJ®-6SZ'ޖȚ%jyS]}ctvvU`Xs?;}ێtc{IT &Msbq [Ւ+[U͈+,Ih~,XV1eV*;U}ULe4}1a8⇼K_zeߋ EԼV7ӏwq,\ݧ^䲂LEJT4FY[7 U*tUv7hoWw+R'?oO'~ {O^h^&M:$(Ʒ&P[I|ßhu47oZZXi.liWufʘKNg-O&ޔBT& 7E+iMj]G.٬䟅,ŋM!\[+r"TM 8Yf}IRFC{ڥn]~hq[f*0fP ѹ<F&X; $ dx11dSgDruEr.#(eȦ[;;ȴQ^t%i /=N_ԳQD5kee. xvYTȽ&9c;u쳋䱆eiLS1|g'ɳ>#Rd@ȲK8#Jtt>io4.1q {BT ̂V"Y1čNuG{H*7]eG`ą5'?4G8'c*’[9eldln=ݿ[;p3h|wOmԽ횫ΣjeCb$wQ%x.+V T:hP`ž6lܤ|[6yPHdN=m+ !+L,n5)KD\rmB*O܍n*SM4r(SHM6g޻yYهE1Jκugɫ]-grqܣBGKtTI̓v]* dTf-N94\Pp~v|G+/4Px6]lGN!1KI"3ﶣvI`;*%jI.4DYi I6r֬wYg7`V]h2Ѕо@Zr{$tu<6vŰ|3?=D.\R膙q|YT/?ɪ3o`>QUt3VSW/H=[eR{={ɰttX7VhAhk: cw>)ݸdm\2Ct< uA,# /]koǒ+{/«^A~YCGĘ"/z,3"3CvuOu:yp<8y6Qi9Xd0V$2"RDyu0y.ޛq&nHxHt#12p k< 3 kaa2=y؃m=.zFy:"=.苫b:mL=Ӷz<Ԥ-ly!ei\R'Y%)̂[V֪}j@%-u@@zz@̥$˄)$%ZGP& F A*5! rm@@ 'ADAegu6<7>ti]1@{g3(@|w1?VqZV_7V".Pҗ$i$<d([^[rAvUF(5r5DꠂP$\S15*S3wuܤt]ڬP Q)FQ2N2xQ !霹,ǸV9kdNkQʻqK{'BOԬQRΏpR鹲ӊ9zSU7?M>mxշɇdY"'ո+&#_@܏7`nӍ't~}V5g,X~Bm)[k,DYܥJ_C<\ dGP}!J굤4eÜk WOP{cD(G놝mGuã\-<{T^GcHX` (2)RK[ǍLZXl ALaGb eHч8rYJ>w*h8<ڙ8<@vQt1zL2n\s514V2"g tQ,E<0D)c͵(gO#l%x-G@I[:Dƅyyxh(\۬}(֙<]wL⁴ު  v]iVT>z΂sRPpOwj59+ommʒ 2=mʤHs>2v&#juU:iSmncdZf|bgV~@h6- `. 
MB6Lic#B[z\{Q0pX `q|L (fSj5 L&ߎ3vӕ8[p2I\̾Xδ/VV{@i|(S'A ,1ei:;ұ@WNQ3%E\0+m| Vq !DC&!ZQp·|HY%ɩF}3qS_5MG aEL?j] q uXSFANؔ%EF@ S,gLS\ɩ-e2 r3!h RH1'-Ho8ĒֱEL-ҫdgbr)ٙ<.rޭ]l] hZZr"0/۲MP.XA[ Qu.'v}ip &lYdw.fsa.'d`){t+P;Jzyrw[/Nr&W\mW>jl x\)r}bʳ2MJ=tGZ͉(Jnf8aOi'KOGeBS:8e"&1*cJdrRٹкTD23 "?B#WVhgG]42zEhiZBCӍa"u;X]ds29 drTR lRDGገB":(z\f7_tuv#}'Ϋm,,D]ݏ|wP͋يxe{TnifwՑ$zZqS Sf1rxWl4NmC2x_?8Iά|f1pK@ 0z\2sv˸g3<㋝ٜ ]w =V 1  Z瘱Z\Z/M4nx 8( r8!-UF載N~7Fyym/u`oow~|zB+r_x5yVkGvm vM3Z'JF F5ednit>^Bk&DK{mshqa:0[!Z,9&y<:8g3NrU.I_SGqi}ٟo&Ѭ坻ui%|eKOe($U#->UqT+{ISk88:=I72ݛ}ۃ7߽;oh9RcEe3i?~FӮ˦G4E^ cU]vyMݮ2JqjLliGf%O'@=~7}tM|Ŀ&/GR&T.r9&H+ AxI 3rB־fRKR7Sk=?N/'gܖ73I\H"n k,; 4Agcb|dOgӾ$0uOJ`Z2"wtMvvjN/oª!môȴØRl'm.=T$XP@[v^Z 'kDΒBK}У^Ld1LGXW'Qio 0JPNC92 rJeV< Lށäd))т:j1JJyZz.J>K )+YBqlB+cC6XbRA+s@!]b$&UTvG{N) 6`hd_#蝀ds53s@a)9֣96vw@~/0„Z @[(`/MuP1z i*o?ai|j~~#_DWٻLC1K?b#[oG?ZgSL\kq|ۍmBdTLJq3Ӽh$Ld*`$̳5qD^W,ktav e~d6ҽU3t`g)? /XwE`Dpc_{Oӯŝҗ \˻U{>3?Nfݣ/#GgqpaVety~w"m.h5^z:=8*i _WXCvW^?N?:ػ@"eր^A _k΄ߐyeRk{Ofyv{=QKoLT9xO0€>[AAX |!7)!4$X.#*nW)J=pf_L@ӓ.?5oqE7⍳U`98i|qn5E<|u^ćOMyFLɭKI3(h$\dt*:n@wl~ s`ߢ7MdMm<AGE䖼b3L·C81{x (ΖM{l!?]$fɠj)e;]du1>(|U7i/6~} ȃof $C=2A\&B)H Ma;|}ݢx<ԺH념R=)JBV兔-q90* /Yt޲V.t\ĵE ՖUYUU/9*fe2͒ Ĉ K IIB6H*Fk[uټ}/߻EȻ"n]1@urM8L[ԋ({6}]Vi#H⊩dTM5= gxM2'͞;ґ\0w ͕[ b^޾P+:.Ɨ\,:uР*4uY/ac"0T6@NzdF@XjJYg=rurkahy }%;WzxYӃM3]Z:Q Dt~owv#k*t^gV1 0 Q2Ct< uA,Ϡ# - sA"ܐ<$*T2+EpI)bY<ԨHy<8dѶNmP  ҽg X3/au?];=(h"8@>C(m=E}{][ 9\G]m eD\)b;[x=x""xu,TLL9U&AYvqD i]}r,hNw! 6$ 8* yH\EgI{9+nKu䃯`7z:A=%9y$T7٢)dիWwh%{gqoR0#4'(xAc=Bw˧lժązV]O$RK56f smob~]*16*Fqe5[9-fQKY$d"iY"M2>3o&F S/ԋ\vŜϣ3˵hJF*!`t4 1E 62(#/*GT KIwT!u?kwpg"6^}wϨo)rs][oeYR!ϩ5 , =J ]FΥuVSdd}1Jw߇)κ%7,:I30͌E PI_R]wtKVg8FrbsbFD AE.@:!+L!HYuOk5f,`ZFLO6hnDtl&wNKGW߃k=O m&oS5}ܟ |yj {7op%"p%dva&{{@R=tÄ]kHDmbV$m7_$mNWןFga;=@Wcv;^FC{7s02YI'׃/+#wCU5[~^sm]7]tղU8P[!nE/DPL_yMWַ]ּTW#j,CYXn+\\j}ørW 3YMe͇2/9ʘE4Ȱ3XZ┫>;*xy E IO~PzƄ)LC&EOV, qq2rDcBjX0v*3,ڧypϲTs$s^;\ 8/+hF#v,As.!BIkC`^8Q=G8 GH\;H1OqBu4”rB"Ca8A'!6!fe pl9@=4ґ>Aww-<Z;`~z3F]YN: ϖDz*(RK7x. 5WJ` "Β$w{ wg=<E@85#V))@\`F+4@P8FA\C`=$uݸQۭ$1qܼ18[OAYT4` V>е+ AR +#(#^0(`H̑BZnڱ_˺Z4ٶGCR!,knh# idGRL<]rSEwٸ 1(K/q(i)`*NcL-0 A)BHƛJ &$0g"컷NLc'T/}БUSFʀWenR]դZvwyfҿ><HQ3c?Jq/y,OLF( & $w) 7M 5uP۔vO[`UX&߂H \d].Q;ܗR=nCDcLyYT~:&iKE+5g$];[Gu)]p>ȾMغ6kkOr B,*Ug7LD\MTiuId)"%j|B!pZ!AR5Ӿ ^ZG6Atq>韞NqM7?͆ trTZ|98n_ҳ3-P)糄UAX`t5r?xJ6f J0n9Ngo#$Jgvu/V( nt$ X߅Y0qHW__lr?HH`{Q*Nnc znQ@w_ &NvIIku2&Jvm{5O+aa%B\+;O*Yk{rTF_+gí tJb5bO%,;*2rذA/[$>djLt/ǖɝKϦn}mB wu?_j݂ۛM>&"DCk#<؜B/ɅDM"(tЌy'S|cnN^G'_hGs]9~p<\|w`B9bc:+51#: 3V+5 IM#u!zcW^xܯ3lgE= 'IӼslP/ieӑL2t}q+ui-bPFgvb/q7cyj%1 oƧѿY.0 p#U0S 8QBJ NbFI_"wi) y9ۛꬓ9cϯX!:Bc :j<:'Rd<)oz7`;i(ѧ9hYս|ANEf`;uۮod*F:V[,|X(#݌y.3C)!p)}?׷T_ط-W3U]* xr uv@{tj]]mzňϺbc e & ¼&BP5; UTsc%bD &' Uyn!,I,~H]Sg(5򣜊Z:"t96?[>Wr=$0HDXɨe;@P6E }7?PqR?fN |Knmu=ܳR]\ͻuպiEH-Y[_vgc SgW ˊ+h3bW0=#*йkW wuJaAGxf,Na(d]y LqY̘C:,Ncʠ}3m=D֌S2B٭(n+ϒm0}\>ǔSݴJ8PN|loպ-W7{hMXvY(/ꉳ ?qRo nprrMTa"^8s8q76x-[>I(~s2$:#ES1A8E3& e Xr*zEM4 ,&t;v'H_.]mK)u< g)a:eL.ˑOe.\"ͼ8c֨L²AA!+Z*cM&KiQJ' ]iW}5 6c`E1kaS2X~y\gX݈&F{2lʾh58O_VvE%-' 'agԕ)f -:twa0d0$^HL2akLRgj{v=7B<i>ϩVRnj$7.(]cg@Ԩc).ɇ5iݜ|OZvݺ{cG[G7hcm f͸! m$- VRjh) AF5CʻZuC)hK -涂|^EbT 91tF 9yRp1g ,*R>` `ёJbT{:I]Z}W;/~YP4BZ% l$S. օޢ i TTPtC[B Z 4BsX?mm߮8" Au|"i#<*%~YZ,X]pl5Mm`1גuQ6`Ņu ix YZu˶ p \5ƭ`4ek֣6JQ9ԠMEX#6н[ fpydS6\baIHuPvAjʝS3)P&t4 NӒ`AV5+-w@i_-P^f$߈X8h&[j! 
]V۞dW1w|͡7CAw^KC 2Pư)0mk 1K @ M%Hp7sTX)𭥘ƒ s<`&Sc#s5tPk'€:#]!%@ `  `ɦ= plVz ޮ ,%ZM=n+XBpSPhl NBC TJ]d-׽( E}k<f1J%)t ueFkuD$,5K͐(ѱo m${µ8b!z_P4yό>PDw~Fw~ R5xld1fPTuUYa:i #NH2!`d Q델4e_ Gb1>ȚA`EHN֔7vl #+Bj@S AoZ<'\(FlnT *íKcb`XKEw,*f-$w&8y;ub٥ՌHҖp;O]~FzOW,*ε$R cV/qa+n8튷p{ݪ`n:n|~r77^m|Ŗ ݯ`*S* ?n֥~zg ?YOY¤Z:p4gw1ZwK0ʍ1tGZ:$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@ǚ7ab9 MqFۃFLr@GB4]IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$(igpyr@OZ>Ks@A$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 k(HxP <9 4Mh-|(]1"گ$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IHr@$$9 IXr@o'j?zo5^o[o{uR/_\?#q7Ol pc&p%^I[ZbKF=tY6Z\O>c q[+~Ȃ=;v;+0~p왲'P|Ǔw7 @Jgԉrg:&⇟~u'7M8=p~ӥ;հwD tDF=oWh)^胎x|j}OE+?k%; bnI9@[y \yygݗV7cTF~cCMхm󶮵U>/{Ǔ^}?ݪ1-т? 4}X n:gJsWGʷ?^5W//¸־"{Zwojt-YV&Ud.+ލ#hUnUl$[WeOaCje-+_Ɗyj (ġPM.}}ZO P1)E&+ @q\,tC+|#tu<2B+P"4[4A[;UoJ^wcvFk%wv#v%o'`'D89Fm2ZOqr{En"`v!-EW WYlP:BG>T73zv$Φ4gS9sK6$C}6/_si*f}{NpMpH5]"ՎG`Hʼn0 ]1\ QF:B^(Ygiu{u::]1Jg|䳞&+.NCW 7LCW@?fdV폑B$DtƹiRbV1U!DivE0}$qc4kw쵟+Z:x3(<z>Sl`:thPn{zpNdiODw7=wCdwBӁh M? 9K>rC @`;_:hD OKzh>(c쟖'+LCW w߯ ONW2j#+kmFEX\`qe77n lEZ,]9zڒ-Ǒ+l3!#RW`ˏF]rXWDG?]]*;3d[":uUp,P-<-"J3 ?"uE?v {,P S줮^2\3G xBǢ 5KTW+&I]<uUuXQ˹>tuU\XO+Ǎ8"uUvdz3H * yP)I]}5 6z8ce4Ȱ;j;rUTf] Nj78˩\rD3)Vj-QX.ā ~OQsm6kS}6k;'=+bSqDnŲ''=;6g_^v[۱V5uvbTu|po/ #},JA:6@+%?]Q1y֕NjiSw+օHi,$댫v SKz,(KY<|KBvf]Ubi*{;xSGu+C**\rƒň7uo+;TOA=^/Wd=/ݑ-?qnk>}k#vwI*f1ZMU.jn{pXh<W\e!Dm6|`4ژ )-wbFg4ج߮mts 5$`GNxī xmG^(MQHCIKMP0%@\@pZxDnӑY3s,v(Rje QiaM7i>SLjɥzæ,IZyA2 \(E} qc<.3w].iO@t,ؔdwB7+dВmJE^/<Ӭ wO{ƅ&80k.8ݫ3`ny~;H7\ڶ݊6ݟfpZB>jC${5l "{?a >nMadhwߞH")uQeZy'!b<%\w??wdo8h[XoϊG-M_Mp-[7dSǭ Ưڷ(Ƨvx3sρ*WMa׍[<nt ~iZC?웍DSw|ѺWD[^-J; *7W0շ}|?ht(έΛwOxi8^7ŴNwKkg ~OfQl| Ad$');d`H&2\65ǫ5ɑsM@}lf\aa;8@J"X=vRj`;*9R@JZ.ՐeL%횃o1I& +DwGc_oy>o<};w7v߼9xj'ndOK ,eςR\^P;Jz $T1G飕FwԶ,I04p u)B9CxHq"%hx1r6tq̴lN^ůPҍmZRh5ޏv \IJd ~5-:*7y)8":i#Y)r}b3. &g|֕HUo&UZ3,>"ЊUBpFiwVLجkC։ Z>panF 0[K}nN}\T?o^_L^4Vct}mE̥^׋zlsWdf7غ4+[u͈͘,2ڄAh(U|Nz4o{?9V{i'׵JTFJÊ篃auzAng57 WV0Oߍ3ETU+֏ן_ӷQ,QW=l[ݲ.!-Pa~jWZ4F0zTQQT9x8uu+v.HHLJwͫ^}{h3#-UgkHP?Fzh547mZZؠi.d7iW55nʘ9T+ޗ L}nU@좿u;4 >Z&HYcp,ʼnM!\[+r*!AGBE! 
/>,keꄑa\ڕ~/UDm_MN4Kϸ-33I\HE&n k,͝,bLt6o[?Ccio4.1qHDca1r7j P/}4LfpQ)4D yTS,L6Ȅ\t,GuH 7ANr8-HFRQrT̳{tg$e%KȞ>[auUYtD敱!r},> dmYQ|v*qS Ѿ$;աt&؀!qaQs'?4G8'c*L6s8ޓY{6tf^Π;p?@,[<ِ;ptV>m -4 Ĭ@"m˴mzRL ɢl$&`D`iÙr'\Ps^K/֗/whTΔ`Ģ- ˜DLF'S,6"k]#TdM"_.qV"VRV ߈rMz2y s*%F}k%h is:Ԧȃ $C=2A\& !fSd %$h)!m.TriVURN \^t|V9I >]`5reD+B*Z6 rw]릾?k8 <s5mS~y1o^y1-y{ZXhLڛ%9?Ds(cI_)-xWvW w qAO a3]:hQQb*&L9 xQreLd,8ƵY#s:Af} P -ׯI_B˲ϳb!o/?4:E$ B{B|\wտ>զɆ=(>&3̙̱CIY1(Gɴ{+>%e?\eR1Dmv,c0SG]b:ne2Dl A' G* l!\} TtNm 4FΆCYr #s,)im, "z%-'s5ȖchdD4#,'JVOE9Qe+o_ <"#$Ql)fcÂ9 *Z/+7;G8~]{^w45 `(`DY(wo1œI:ml$|*ڋr1Q-d`{> &CR)dK&ce1hl1raO Pvcq(Z۞ɂ}Cia1A*#Ybʒ8;ұ@+ѨX3ْ"ר>JU!eȁPthH$dQ 6j2\?|HY% Ta}9 O#x(1U#5;iēF|]VFϭ S[oN h%:%kc )m3dgٻƑ#+ ڐ~Qq;b^{gᨫnIm67EB@1=3MSS' Y˶6ʈPBܗ$p/MyԂe}0T<fBs0#6Ɖ]gE5ڴ(9)m)yŭJ@ϒ>{/>a5qZLFFPF(ѪXbktbS-|H#Pm;\#knk֦z{Ȋ1rFQAOCwe4,dC!<7A;L IZAdqWB #Q}Ӧx3!FG̠y@,êekdVg):` KSlѱnIyY*2y_F>[izQgDzcN1 ~$Kb4 H / U ^C^%[I"֒H$²J* kÙ<fB* SҌrH[i6EJ8.pR$4VmUˋ`/}N#}8TƥOϮ#:қ:zF"9C , j!#U&^ lXҾzELe+Rn 1JLƂ)S6z.UKI2ɓ lDCM2Pmt99ב;*O^=`^eLٺQouG$+&ۘ9tTWϽKu 5Nv}am^w>ܨ_ѡ<0K7x}Y_i"QյI ۋ;mZ*@jE&5;Ԥ`\|8#F\ 07`Z_ـA(77uO⌀.fvY\ߗ_zu¾<}[DÚ5@G-tCKڸܳ%oЂH٦Z^qJ㎱<>p<T]8,5.<2HTeXgEj0~}ysib-hbߎ?t;zrK-o/x9)O>(BRlD[yH9ʜEIM1Kdv`v`Iݗ]_ŶA]zp‹'&xOqŮ/JB"G2_hNUAWETg8R gx'v  HT+3NfJ@X"*EK" '^ IflvFjL0O`5ɪL=2:#橈>pJFRT<7,qSU~)xc>}Đ" ܶ-³5 gi{hUEc*ּ1^=^3ڥXDu 5%')x5%3tp ]!Ź63+ōCt ]!\ݙֶ͈$3+ cDWG3thm+DtOWgHWFfi sB3thj;]!5++Ct ]!\kBWcxy=]!Jo5^^a93`OMWtZOCWԶ]t%+]%3+a}}1㸡j(ezZFhpVfP"%&|; ~~?NK#'˲.%PZ7(im:r^ ~eI3`O-.Lg xdoi⫚16N`/$T Z~T]r44TCK{u~Z[y]T.༞sH%2p|AWow$fՄb?%hw;>0/̱6./٭MsTkjUb{ڪ4[''TH5K;#jg(դª;HWҮU߷GtutKؘe3 rvBtuteKqXw팺Gw]_OW!]!`g>*Q~Еj?yEkNĵZKNCI[OAW^խ<\v2Ư[MPta=mT}ũt͐ZKSXтM[p%Q] H_%Wn8C(5VweTeNAZ[OWR(]]o\9r+"#UdQ@ >,h3K@n/Ҥ#+ ѕ+6hΠ qU]0ЕMZvWWrmŻC//l_]Y]Vij?iefP+t䡏3lWW^jș5/b46 RbyNpګ]t}䑥A4h5b-(y$9+?ǒarrd;9k7_S즓;B'ȓ EW ׺Еuit(L:BdqK fJ +E{~?])@!!`kfJЕ%Z;])013 m.VJFvR%]y1DWl7Rzu(LIW?H(]fJ>wy~hiJQ>PXI]wi3tpls=B|?M:B z[  plf]Ѻ՛AE']#]DnΠf3tpf̠]Ayg(JRXoMk+Lk;\xʛ 7/1՞pyh՞(e]ʛ41;ŜXsƬH~}ڝE%1`4 n;4 ~4 &M#MS>X_K_k?kL|P8+M}ɞa*n+'jΐaK{ԾWgn<=Zcc]Cak~:Ņv4݅|S_]xի;ݨ_;?^7нr}y9o/U*>w?ПW=%K|]|~?5D1~{]+O!ov-đ%1A\o>|[ϼݻ~s~iX> j>{,.fO^cz^;֜i[ Ńhv^ aV>zB,uiQ>o>5 #7}롟~//ӟ9l?^ΎYI=;!5{XFA|l+6Dm#H ɷӇ;Hp7h_#uQ_wf`.N߶GVc%2Jn޵(>P m6rrl$܃zK+1 YMD.zor3>jnx\i!ק~Rg[vdkO iP[؇iXsd#%;pvybcLU9:-$8pH;"ԉ"6a|iǘtЌ~4G%"Zt<͛obi ڊ!6͑ٲ/PRvX܍)aN"<=X8  91ZZ]Ҩ%gbAL~{&>g-F=6xDO#\Ct-e`031MDJh*!q4b`>4!F%;cDu}v6!&SZN{,si!gs>c@F5y md ՐR"@i"%!BRRZwuВi ؊>$g Ĝ`G>T-%E&@ڳBAnI2Ҋ :#eohbJ~ 0D`CX[ aAEEg; `0OÇ hv L7Cr(iW$xtypl5O%;zP6PŅ%̆mc"П*C/. G@PE#ac6 6J1Ž9Ԡ]ET± d{{C|#Za$@?UnpuTK`ْ d䒀&t$!$ d>pPT wc@i:|U Tf0|ŊdzXE{jPB]ɳ܁6BnTz^KC`w(c¶oY@0`=ePBdBEs 4ȆA#quڂ [!n62ب}+%@ Xm8ݦ>|973ɬNAsDH7|lѳ>M5ePHu;5WZ$F0"e-ÛAY>Y No{PFvY5x7LJxKd€r* ~Bg؍[Z3Ltr5wt R K3HρO8R|,6feP >o"8$`uSj|rwp=,9F?"Ċ(oV1 /,ޡpNmQHtb(5#҈^ :HC[s M)68&x hN;hc3ZVjz R%_źLzd.Z}ڗro% '?joܘARwU\Y36(= 3A'Xk&8-owbd=pevz_hJ7a#EIӡ|m(utqH:ێ4 a2"R= -JR`e";AR+.#) :)Z͘-, v wW<"F()H0eAAxKA\^n bt[ۏN1tZ0O;?m9s~d/5Sp VOs4 ?nץ~W/(n\d.UWZ R Wod'W`BT*YKرUR Wo8Y`0d ̅?WZ}X)pJ``  \%s :JR}p$+ c'W`x| ddW`JbQ"\)09Sd D \)]TfM•= Hp->ݕQ ߞS`п? obUZFRު|ƘL-aUXszx=z6~3!1bPD X2Q #S,ŁӤwS')"NUhaKI+RG> dicX@?YrXM}殂\T0\(xA&A,+& }:L Xa![Ӿ[yɱ$Q -9+A3d8q)\F ͠[V8b`'l?ys]!>yv襵imr՛5"]J?I q=Jhb͕OueK)xu^Ny ˥ ;\K? _0^pߍ_CxWNOǫ07rmCN+ u{0nai [6trse G<mڻ` ¥h@\gj»ڿh? I}.AjoMҶ`]c 7:-vq]>}%7f|~E87ՂÒfb]0Z?f O,?A1t/n:7m ڢm;`|<ꏪ(R_0pa+?A@,)d.=ZvJCq"q%;g]A7Mk2\<(? Fp? 
#PO9_(dAC~=wTZZ#N®c{طqfe~DrakKbޢ'>ȰQa]kj1Bh)S&(2Rl qJ<ȿEck\cnM\`CX"z> 9}v8?E Db4ߴFdH:G-\Xc,S :]$fpwf x8jȟ_;լWj&XsL9jJ-mю6|gueo|jÍvo=&vn^7^i#)"xr:͏rsT\N҂1L cF{K|fo99rIA4Yn3NJbFk)y)o y^x{kj^(y<$򷾾͛:62ilKӳ|K-w=XD, YgE5zuPX.]8s.;ts$Bι3u+^rT0haC)+g2SL]Sψ ͟1 >)A;zƄLpe a0}:ӍE!#1{#>UL;CӮcڢmQ5 m h2+k}oa%]쯭YW= i9R^y:v,ЀZn犹ht,X9im g=8#A!v0 o<Ł BdL JS ,qy(C:mCt XH"L<ӈKI I!0KZY.Űqc`̜PZo3FHo*;Tj[+lhTy= |;} ;/ <VPi=tv!o(\jAkT:}t|[,Inng8\ =k'#׆k, AR +#8#^08@|$H!GγY?Xϓ~d]v-]. m3INQ̮o8 Ȏ.@#0ry$yk,pTJ+ekKNGBZ𚖈qĖ@ օ9e3+cC :tkbct o]iYl=}:qt/"!%h۵qMcS]mBonRbZ$@ަ Skwu`Z%tzt66HlRDtNt^Aʧ]1fjDnGϬ'nV}2݌[nMs3//6/<ɯ+3tcmB/TJ]\H$"H͘w2UvH0&yud:(!E uox_{(TY騉11 \МQZɨaHj ˠ eQtL[KݟXF;oo/ iGul<'a~c ~ѯV%w{~5]ƙ+whI‰"e@*ha߈pvY} aC q$F^xu>Fj `(Ee^(h8(Ph  +7‭iU"bK(F[d:aFhNPƎ]% r=j YX4nrmڂ~iǒY$>pZfuفiZtbJVʗӾ/FSB2WexuXh@VhyaרH!Ʋ#{G#9Y|J4B7Hbc80JHcI(U0=oS\qRP,/V'36}$5Q{AdS {a}Ly ßg:3WYVr;L`^Ԗ DZo Y,svF5w .+|־A +IBjEe O̍kpZ]U~_w3`z/g7=ި ߙLeՀ;A5ӱI-E}p<)a=t|o*87^?z>fpA-m*&ɭbEtVe(%`ƳHI(Hp$/o Ln߅Q=*,. IF쬔&Jz@1h$?  %&yBqS}a c RQdX`$F-Ƣ9i#H ̀]QQ80I$`bXX\rO>QC]gNqStrdxwGf?ZH9 ̀{9E{-8x*;`jK ^Ȍ ^`A Ɯ 52vfvdW ;b eƒ牻5&xM,\sq竻Mo f0//za3椕X 5EeF9=B*Ņ!&dݞ:@D'ZK(IItY$) ӁiLN3s#~b jw6:UFn6Mi p1,"YYc07\ lc:Y1 !f R ]8Q>r ;3g;֤~X/"QgD̈Qd)7WY(#QJ)@k0IlUBBEdkZap/Ą3["A9L0I0SptevD -b5ˎ|L{=qnq7zRDekY'#YB/{H}CI N/R-qE IVO"CPaɜivW_*JFPF xhN.B.yH;a{gq Cz.yio9k?@`/[~Ff#XTyH&;=YE-JY5;=%v ܒ.j׺;gb5SEHkW|P΃BIDC GÝ3Dyl0KXI# '76(x%LZ|xV{[#gC/Ź4Ĺ\ Rp'qC&nBMɜRaZnpәW Ԥ1詠^`y@rHGIĉcrR|pJV/u|0skBc7AhiLkSwbt҆ $X!5Rh/Z~,T CcNydV̺QJdw5T[x!Hi#!Ѩ@wZ}vIz鱶]D8eB6J$jkq͌S$g_\:n4t.#mcX@m Q`֩s]R%i{3TWJE. %0!sðQ)lrVr]-wm ^>q=J' 5.ލ>ե{UlZÏwWӳel1Z)6'}ӠzV-]2W2D[\~ %C[ruKMͰf4,/ȴR>Q̻h8 df?]9MIcouɦV*P d;x$7,qs4 F=Qe_Y<*~?QY-{?_}nOBcvl~^Ȓ*{d0J^}W'PWQRTY WUu//ϐIB߿??|wݻ'x{w e  "᷇ @xW4mjۛ5͍ؠiu=+rCݮzE,Ead.G%IHo6U6xIB@3I-( *s3! E:$Hml 䞿Ѽqr&;BM)OLFk}t/pmp8# fq"Hhwʙ^%nmMljy}[Edgwʍ;:xVUӸ(ߍ2ˮq%GE}Z) {fFc:G֜Bvvb{94d$$bpxIёQ.zbd$̫`]TN[HIBd'F(ɢKFVZ#gX5 zOk?84칇eJ q EQ$$M6$ԁYBt`%) vW8e8% NAX+#M$e΁ 9ub.qJDmSD Nj5Ʌls:aى% $#[ق1ܚՍ 8ZhdR^##k(8˄JJQ*BB茁;#Cui~Ok8oƿ ԙ%usI9Z4O߂1/ Qx9siF٥2Xa*trd |pڏ׏:yJA&.-g.D,$aDz(\j7T2qn.:`Q&.þC{DfLxhZp,(~G\K{h j{\m";uNa2Gu{2S/Ѳ6<W3]}lgC-gC_k,_=JF{Cżvi"մ?ɟ{7~e>P]$; U_F )'_уwwz:ּInNcM9ؕ^[/eMVf)빩BЬLP?~ 1˹Yy* Q¡~["'n ).~wpNFuNq9 #)ixN[2 p[ "1#r/ ]Xӥ;@Io*TBx (" ODH<a U[k4m(m+3+W& R(mF o4>BF @"ޖG ֕B= '[ 0-%!:´f4*`,2 d\%G@dPwZ`+Z Jↄ'5QLVƆ?)}nUAY:iP9qYWGwN|1,yFH0drx.*>d* ^aRU&Xg#2J>q5rUr鄵WG\ .$H\e}6*kRwq䝫ūWJ)8oԓgo| ?4,}9C*Rxŀ"jI /,q0' yХE#z=`UȷJ,Bp= ?Fw?|s\sbө͎OsB`nrbOynM2E *Ub\*N>_j9KzI=gTĞ[#!p40Y^C݈R9Ru΀gGL`&Z3&*Ψ9sJ=x4iwϖ:75n ĘkpE,YReU ?tr4RKǹ)RмybQe]׊e9JA :t)PBd@X(!)DM:$ךxjzpJbxC-@(S| {KD9BZTJge5r6Dl~ q8_g t\|y};&Yk=uMV7LޛP3+6{O.yգiY_ z{Hw(CtYZ_uL R/cKϥq8{*Yd}~4 d%zn^S'ԼTrz௹FE7Y46KнzO5vnwA~z&i|iκ?+ir<~uE֞$bwyvC ՗[͟szC3V<w$W\3컃{Rt_DŽnJM;+5,}^wz1B JE;! ic2!RdN̅iJ4E$a .\pMǠu*z#ytEowr{_YD_Gt鱂'\@$TfEit J;4pP޵#Eȗ=l4f;d 60HVD+~mvVNVjvՏUz E*/x* e!)ֈH;gD`ȈB!,t,"Q(ESҰp*iM,w \ %j몃EG~P_9kWӣ:\KĊ|ӕ7i63<-A]Be陵˧F%Ѹ( ([ȀIn<tx=BO<_H@6`PbewNXa1 IAXiY =t/JΤܱ }^V-Zdv[ š*Cu'knn'p=K]xMVa_%z;4=YGY $랫 9EA(uT`٤'J1PDK)ϕut-m?^F=ר&dFӏb =eMXf6WCV w4ib#5Kwo[ Qja4O5Dain;ɇߗ!֬tkե'$_I j-uGKaת77ju(KzxY-ˤ7\SbO'+iZL!\ԶG~5XIZ%߭3Zh-R>7=} K6Y%6VSu϶L\qgmqpDt)=!Vn6z} J!ZdczB1Nx,"Q ZcvdqVkgzF'E/cҿyTdx} h2r>䲊R3T{WBZTДsz^\sZ<Z+uo\r]>9>i9z;U_pi+R-y&j_( %C 3~nrV~3 )`bhC=5LO2[ƀ'?Rұ`+F|E)L`%QL*] T6*^dsaDa0{}`r)I*1$)˜6hЪұ97D/O֪څĸxfklY.&S+: *|_JYi,<]Tbz}Uު:ㄆNtj|EY2(Q}yofxT {EqYqEܮ)C,":N$a4`TY_\2f^3Doˤ5eZ)⍲&7tZw+py<LR(1%dcvJ^IQAxL3fBL?p,iuO9WZW PHM}rۮdx f[сZ$57J!J[7|JSﲩz"׋E~&N[םڹ7d!,Sd^|a}lܫ/w`ο8+ukv|JJP$!5id&[xʫ`brBX*[L!G]HsS5~>:7) <&ex䞷t7WFdd} iX` uD=a,|%mvsڡ37gnʦD9e "I 99Dž. 
".z DŽAS V-K{;6@&D5#JւW"jҲHy&5ը°icX2lV̐I''S5dEܱvFJM)R] IM}lIMvWqw׎(_M|{A ZcDt9+a5ȱM)@2>Y鉝" j:=oCƠ;,!Iї#`U0:3rnGtΰ3θ/X腗6%sU3n .Y^ܲly/oxi)e"Ir ]zx]eMw&"q+JPR2CRxYC۵).'b]VMmZ\1rS=&АA XfP60Q7QD!RKtV̓_u`Sʶ7b:;,f,I*Y;މuvyLcȂ@V*l^c'<6 UѤR YXs 6o)*!92%(P#]4XBHVhޗ+CIښ$ka!_O#Eв5'>Z# iXf7Ƚ u+Hq,*@RaPLƠA*&*0|Y's:JzǢ4f#h^kfR5_yu~3`7Z 񳌧9W_{~ͷ5:x۴7+?ΕƩ 3XA:qPB? i|jh:{/{'bY@8q {4յ TqBPTS(xjېgn|ظ,'Ghhł}YLxAx?$6 ѼV5(tpp^z35lHmjE]xYyCFi{.wԉ YCi0- 1Saz7Wn45l},Yf+e~߼_S jSg N޳A߽~W߿=_7xeުZH0L¯#<Ϧ0ujjo=zAvԫf^#ys眞/G॰u$jMh]&d#6߸d9%SR``0db(%2*ϴBua'Lo#=a .FzEڎ'NDAFdsSz)H"u(?Lg:rv `b#f{UʸΎl[wŏ}]<(S$V߅Mm$Y&;_ϢTdsh ʅdQX˙H&^M2z9[XW'(hNVJTAZ5KN"QU ur%@0sݑ쵒6[YfC0(ySJLd*F 2BJ ϮP6O*䥫M|D"^ X$ T[v5[;X\DF&53XNZk&iCcJeldlF^{YmFxwgrzAʘu1>tRAg J*aͿ/_,}{/~OK1~a'w_j=\gp}QRnw *ataӂ墡 ŔY2Cx#g͆x5޲KO⋿kAv?b\LlkIμ{S|E,_|m xnavu0yQI`9/MzZ_,;d`ޫtAaWMrހٻFn#WXdHx*U*Wr9tv*K II+߯13|8R4qy`h<@?yx.h3g&6l~MOw=+<7΋RoB>ΏG6eҍD [DZu%xFa'TI&KpF 4Gѷ[:oa+GK*՗l ǴƠ!ҧʀZRSr m1`3➛̽"luf_&M\̢V0#(6cF c̒5;˜l}@י ([@oQb xon\P=`kr ^iGjdY@d.*5ʾ v5uw螾oBg *'P |ڽ*8}Uݿ/tV-`ki]Z7iREaN0CYeQ^)nq]c`L\0sHg[E?rbS-zSb=jfH瀵L/)8p;oLp jg>sWL#,8ZI4>b]s{0k|{C !෾^ ]yT]9mЮBWzZJ~87z+Shq~=M֤I'1Kr3lP&q~ARLZx[yoQRcΈIPT*sD9;6r.Rj)<,l?x|j "h8;4" 2z{ny8x's[Un-:zt$1FӤ>a^8ciy sZdD)1lf%٢t vDXHɣA*LQNTXK"!J&"K >81 I-o JqT0L Q0A\a%KƦb5&vH $ڑhrHM@3G6` UߗYo-y%Cfq~;DnUP$&DBgFO% B uch^&_ыok?RHz{<( PƆ Q l)s /H|"mQ`#(,tbO APن$K [ߦ?4{.Ժ=-|%r+$8~$.g¯Wx 32W pY>,UwUS7WIʥ5W\%i\%:s{s1W 86oEWo\\ Nn^$d^&k;/ԀUeR*}Z押\\1"Zp[>#\Y!?vYqyu⅑_"_ESهS~g:|2ɪ)EwN@qM{wx;2:Ҡ E2pL?o L˅~\Ϙ" # v{27Cg7+OgVY˕FO O}֊ }$(*IMl]!KM &ceX3{zms^rH( sJÃ&ݕs8#O. |~S\Ws䒴 'Դޡ'G,0쩛$.bU\CsE:s\UVS7WIJZsgUҫ'x?pj2v0IL*>z|w'eAcLirFfTMⲳAIZN&)e{<.ͮDLflvUZ`񾃕烽P{.?РB_/g4Rut;KYW]m ^ )"\`d%2IPED#=!R 5ϣztPlUiԫm%>޼}Xyn'353=,:|އO@h 01(Kz YF;Xp1&@LCPQ|%M'{C:Z.nsw5[Z*-MZ1Z[lcILЛu|1U :럻!\%)|z$V-gz-gyGizly=[^ϖ׳ly= R׳ly=[^ϖ׳ly=[^ϖ׳ALE71قзQ=Х0av@ΡR:J*E?ѳ7PY08!ruѕʍ[i ^ RQ´jRw &+n/-|M0Ńab*L)3E(sH 2S6w;TDODDO R҃FI5,t-yI&S&H"--@b=?K*Ljd<)V2""&Z0EV @ֈtq/o$;hܻ/x>|ۯnͭiMѦ0ɠ)a &K$1FLsC!bM{tZx&72^Vu"h@QAsgт#*$;A ,ix7&gu<"(xv oWh=W,Ǩ:w}x|Y 79*.TeI1if,و>z/zsx#4H-8F9Mtɀ0–@a5I\P)Tŝ2/RVa`SwZ D佖SpiMVHKDø1qjzm~ ^`:~cA7e70܍_}dFsM'p%a}o\mvK.\h V9Bnm2t4ILBN~V u1a+:Sh{dK6=~nͼChAˬng=/odRfO7ucilWYvwYDdɱئϚnm.%.w?n}&|ww/C/ynoJ`7OQJ5J}msIb_#kF#v,P րZn犹` ! b夵!0/(""$]. 
l {1 @hIWdL JS ,qy(ti0`"E ySBڧR'%bi1g;JaظZӛ-|:^ ^h=^-Q W?aP a.QݘOtH~d l` Tp]T|fiK0+"=ȫv/w/PQXik^ڔtht"hCѸpP~h:Ku 2㪎vO%&H)jAWu^-w?VUv+ku;8㓾YUߜ#QأQzĤKIvRVtIb-C쇤y}=thWhu't7qƓ%Uz ?1/``\jBIzrQ 9U8r}I𥸳cZኑ%|H0k5yA ~%b-tlt":(:PC{O[xE˿7en{ռL'_rW^\_mpȁG:SIE*Yf)T+Fռ3hgДxRI)\վyW3źlbqNB4)h#!FBʙmw}yC=G(tcmgYॕ HqID:1duH0&yh$0=98iw`x옡`PJGM!ĈiNxzXJFm CRH]\k1zI]&az7UR4tܘnQrг|QNS> “UgkQJ93QL~0)v8~K`8'.GVY2~RŜ "L wYe3OʾwnX$v?A $V[BxnMWp'<{1i0;gJA嶡vq}r):8;]( ߙލ~|:Enl҈:V>YT6vw㹨S?uK!q96yݛLǏE}Kgo_ p$HNd -F3 :1&c]ˑp r$aUNySqFc"4XHYS"8ᚽ!cFY; ZDl h)aa1qv $LoG3ri<1ǛC>]ntaN̎3K"->cEiWr5CثƓf2ɴB0沇?4A˭LemFgfbā,޵#뿢yh->fgw3NYGXdSlY%ȴ-' jU+ =z?llp4 d6 vbXCEC} FQo2Soәuh.OVO:-8pA1c+TJ완ll($팔Em$T(i}n%:8o@x]STv^Lq2wasج&U>w`V I;0v`̮Ui*=LLIU>SկɼYfWHƓUn~uW{¯"K|!'VWq&\1]<99e=RقG*#+j2ٸhXɁ-r*o$O\\L݆[ѹN(П¬鈐ecȒcF#&3Da)bXB0aؿi!k&S[Nkc"W۬ʾXW+%*If'"1RK1Zľٳv4.Z Y d=q)k"V֝[r5oڮD.k !:&y'e[1ە5%$۩oTeX.?Ȥm4QAg0E`|RKZFs$P!Bb-Gz_ :'S4~Azʴ-y\TLlBp2C 5V,k߼Z}񜗛1c(kxleUFɅYY.%RM[mXS^K1T*`Qi^顦[)-(e}OE* &ouz@d}ueRv˭GRU@VΑ#+WeJ=MXL Z-[ۍsn64L&XΧ]YY 5sA+YK~VWb?aYk2拔|˳3?ΕL IMG#Zz?3aPsz|(cvp: h%2նbѧũw|sW RUKRUY9"o.e2αIay5#_5 Qwc!o4CyCGԳiG#2^1۟W7Z7dNbʭ&W"礲VkRG;>yu,^.=ճ͵&G?g~Oفنͺ3y=:|fnm:#ϠԶ-֌LX EaEx~<'yHO8o5ڪQ׷zmn{Vg9y#a'na:kf>г}M/0.J 1Dk[ܐ1,jF̖OrvWW yT/#ΗgQ?|㋿c߿yM|//^2I唋LU[sy;th:jZ[5m#ܢimz> M#oioWu!KzyG e[Xv|0.T4޿S lZ~tM|GUs͉]ͩh1Jmh(gVWւ[)&ΆqF=O'SBJ6RY@sJP2B7M Qꬲ(pRHiM˚^&;Bo}v]孁;WlUSX=휻YVm9)'OXʐfrFSS OMd(5ylr :XeON`PNXrZ\EGPՉAt2qWWTM*L%!y Ɇ!BZUjݤSE^gV32u͠oEJgN\#b .y( Q!;DPmuR* jqA%4e4D:Q1jTb3ETnoZ\X}^uUE|aeױbi>lr4AṲiADXaHu=d/w֫} u p ꢊQ+-|x͘ d[28i0Fyұ=ұ^.Nt;NErKnvMGr:?Ayrg WuqF;dkpZ )-w筽bWv[Н\w~Y"SZUNHmo$,*8B`Q )juJ0%ē*A4y/~y5gp!jDoK*y0D`F. 2'w%9UiR.{޾:r4#ْ % 6/2(R v7oSZC|=W|Xγ..tfJGBj0A cJ܅}%V9%AZkzt﹞B@m(J4j30[)WABSr&9ֶm^ 6l \ǥ\_bLG4̳#S}|٩\&̏2#oK|ks/_-wMbu췟hvt9ܹz-_ql]h/A/' i= }BՎnIT;n5T;nq߫^:3U;RpF iG"cRa2@-H ѡXC4dP3@˾'M/>!Rb]!tB0iՠbZj)BR;R D P/%`+mB\˟ lf;?\:/l]d'fH4Yg}[Z7 - aD`15}KAP5:H0`fp194AW6{}rjՁϙK.>XT6F8Z15CZωƤFebN R}7Nfdb:1Sm39i֦*fshjΡ#ݺsˀkM>5'ߞN&+2LAgUAk5)*SkHYe/3"THJ޴8]V~wFEI#hqQtVnݹYQu'KdX.Zл[nrS˨n!Әn1Wo;;l.2y:~gnS#Njho \S**bn&|JkJ Rq`M\FkŜhЃ]-՜UxZmp ujʆ) vUToح; t$cW[;0{g;А xŧE9=Zl <::xt8-3lѨ*[J8'C,OrG qZlmÍ!es64i}( &QTL@bHbw;M:jjO:M5bPu*R.:'6T( ]\p6r| Yar[C7lbP:REH5[wn#_mFjq_,b7"1MqI|>ڂ::*0bP* ZtJ˘+ ]-bZ)/jfg2R:X6, uݺsoX:]\ gʵOnR]Ժ]j] x ǐk$HgnֈmYvLX|!.Lws;378t=Ի{0a 8wy{ YFC.( [Mяڒ=|XE([{W֝!U+Ohǵt 7]<דB݇EA' <qJ+|lq$/iWc2~`j݀^ HCY 炁dxcgmI 9Zr;Afx焾,/rq/E j v.n-tqMκ7JG +zذv.3T(_Pİ/!|){%]ϰTw+΍;i.Y\0^|b0na0PLrC~\Q%sG bϳ:@=:{qD~sS<ȰQa Jn!49?K bYN)G I[`T9F$㑅H>HԔ1т(H8F[>h'ָ?^Ib氋~^7]ZX|vVּ>ߐI#d(<\slXM{tZx&XRv;y-Xhm 4wFm6*GT LԀREچm5x x2bSًկp昢k+gӜJVNJU9c"/]F>r飅(g8FTAX0B AE.JTmU} Qc8wG~Z܂fre*;@Lnem:rBMrNT?w1x&:Sl1 "@ 1u˓uIO%w#7 3[ENE \+(23╋ CTY\wZߚaͯ=RBY]hqHmL)j;dKRF<)p& xP;{Uax6/P.s2`,ilL0`8`NM{(TׂraUq/F^,?3_ߌ>4f/߾i"NMsìufDC*Cq\ިMmu=dI2\Fv2}mCzx91y,OD@ gA/{?3y;-d 5@jBti"`d5 qP6>,ԦW4M?Vܠ;ן_>B٦o.­fu2++sbU2=Yz1~^ו:dӫKjm4 dCc lV>D3̲^0`&~\[]SH!Tª5W?n$[or#xR#Tv S bs:ieTiiv`vL2IYv_ENLm X:p³ [w7#0J`=FPK+9wAr!Ft4#4cɄ`Ln{S]Gկ:jKw2ZWOqvtt=#r uV:jb !FLu34gԃV+5 IM#u!z#sk% t4|@qkGv#^ͤz>.?MEpm ޸(%  3pPh c?MbZ@ ,^FVH8 a!p<8` CGOZ#0a8c"tkd :N-9k7BGCJ9ō8`1waʽ' oR0#4'(xAc.nkUW5;׮x71Vo}:;)mA?c,Yq;8bӺ,FNL>R1 C.R\~y6@ߵuk9-¬Q#Bu8xY,͑{pdUrGa6GB VG13%t$f*PoO:h^.{mp h}zѠGEGh DG'^<RèU )Nx Lgֻ%݁Kk5Os+(gtAUCT,}=nbg]>V9 ]n:"HH@B n<鮰G[p2TU{*eׂ~!^G׭ƹ[RWp-Ҁ,:M(T-^;^GN_\Oڍbې,k)L@yM>jb w"T~[]lNN*mx-k]Nޱ[̰ei4]>0qLEdXcRZ7|b3B8{R :|8^lP&?vsH{\h5Y?yW!r0Q FV&J@?F"0E`NSw`pnW0y TE-6`M@ZXXTa3 Ni7gfL?/@ WX^o?V}B!Ҷ!a<]=p;.7=j$g,ar v0yjIYlQT>PٖU'"{%h{ zm+L,uzC#,ףIV:ld6F!F+pna &? 
ԥ[W5GqlU/lI?9X_Jr?\Eכ\ie$r ':=\|J0'_U㻓_ɻ>`N<:2 8fᧇ19]붺ꮩb;tI&|~yCnr)}aB9NKgZ{H_S.#%$` ն"ԒS2)SH"[ kNWTty29]AOxoGwG7{Hqw e$qI1\{kEN$H(;)%gg]N1UcN6}Ly+=hq[f*0fP ѹO\?R22_z y׎@ۂu^V 'kk!OoE!MQ'7Y y60,Yi/PŇ)8wx*]rYj)s@6W/(FO' xLThNIH30 Zd$xN_o|雇W3,P/i fRB i@- Yfų|`):pѱU z,z?-HFRQrT̳{tlg$e%K>[NUYtDUm䤂VҵfL̊RD|V!RlWZmG`̅E-g5~h NqN WfiC=N_VlwN2Zd'j:yҁ-yr?B@ $Zm eL ɣ]֤;Ί\)o@J>ojK>sƘr,8bRzHʠ(agp'9~$L ?U̝hzU9ore 5'_>u8S=]3[_J|Bdr$m`ŴJdt*:rۋμov)Mlh]z5}`fc֋Ui[gor;Ho@ zhq}BkSwydu!gЀbp.ify)rRhý[BnۼQRɥYWTK=fa|ȤqL輲`~!aWoy47MCA榩o`Cgw\v)񏋋b. ZIOO+B{r3yZR0m"/peuӮyU GP}.E\TpmJ^7u}XpsKky(I9Q2es& (b(CB霹,ǸV9kdNR7k3C5 J q@c}{\t,Ž=UnzSdc-Mk} ɹ\2̙̱ ZohEhp.1b(.X0Ԏ!x ƀbĥq-!:4`SH݉d=VYXf12Cp,9hrw; B.7qPcTϞd>>\Ss5ȗchdD4#eYMעGTJYSB/"I[:DlP{놊 J;"Gm+z(wo#zӓzGJ;bq&] ostX̝tr񂄕8%gǾrlsP CaHp R)i(iEgA,8( ^Q[S!֦, RݦL.1ٜLk9W3T؛8[lW {ӌ]bbBroo%Ҁ_.5:{Ӓq~q4- 9DqX֗ײ]O".XA[Q=78ŽҎ}C>-_ky^G$PiHxs25 ?AJ0ߋ0)J.N.:4$Y%!΢u}G£fEA'>(nHpHMtlNYxLk :@+U 3[iۯ:'^tvh'j~0}gr={J4vɸ2x7`sow\,' icyx=ZZz&ʙ/G5r˩2&SR7|팜y4/8&`J@&f5dzjIξZ[.N/,m \h0}/%g WI y`~RN=`ZVy1+t=9Hwތ EiS>czPS׺ci<6s'8n'wpiesV:z̦t/E|׌Ϯ٦Ýy3UFe *퇿:ʢtM IK)[䈨bݱ[ppHgɭbSJCX?BYu|맰F2{@pU~/*y(pUunኤ.+\@8 "97WE\;\ɮWNW`%K+..͕ZNҕ%6ŭʈAKɏyJ,blQd똇^%[ ]2'☆;zPH2w) n:~ffQB9)dLThrH;)+MG3QR!s. a=BRӸN,~t Ѵq87;S:;Џ0Fq3))P/vّ5zIiG_]n^!Imx9!!E`-(⚃yiQz݂"-xnbm^]\2ּ~|ƃzY"-uNgBtmq@J#fBqzq={EO0K&Bb+Dm6,xpҒykd"6f$$Y&$9ƒ@o%rzȼ7q 6"ǿ}jfwi:?;\O/e7-jwe||t9 Q\5PcI2X)O':'HMRĔB^.뚱cω|ߘWͦݜfK޳!^S} oލ1Kcnk6{Į\XMzx-Jj9K3{Pugڸ 2 |.Ghuq'l8NJ;ӥj;>(47C!HО2e'PS't@0)Qࢷ=^${ALmح7BarF6rR.VJX)+b\]=D\IJX)+b\rR.VJX)kJX)+b\rR.VJXQR.VJX)+"VJX)+bJX)+b\rR.LRޫQG'7mZhL 1ٖG>[r~sS[n:vdON1Xʞ*oRt% h%=xn1Yy2MJ=ൎ+6GK1 1٭eNp) QdP!l,:PYù!@OZb~^w_{pMxyncFu,xi-F7NIJĘ<|v.h.Y}fƻϢ.MO4meH*r @-BK(W!^+^j]Huxԯ!X]ds29Lؤ3f ɕ>LGገB">(ܼf %I拮|Ŀ6El(YSV =dU2[||ҍ s/{?H482 "O9v)94M>H3voyԽϸ1 +vh)~2cBA3׋ R노3=v&h&S^fRf?Nss;::==\^~DJ/Ti|:_֭:0gnm\:T.ݨ" ̛=nڥ>=}w_'g^ucG:䋳wV]I4oדn{kRý+`lHo .7 #6cĸkYU_h>,O/<$m'K}$*+&`*cb^"cI,7ާtD }VZ{վ|<#`# |>4$ R:@2W3 U*)Ʃ9#g\swChkaԙH"9|ee2C .z/~"3E!vW/W/^it[?y{i*tt:|r0zӟ >v8?~a6ۣv^QaTuiNS⋚2QV )8l,/B]ى.MDr1$_FwΚ^7-oۜNn&y1-_ce#چk0v:\>5XK0Q&3='=^]?9oQz^F/{{i7^z6][ 9׋-"ڮwm9ju1EJLJ&/ 40nS"zW΂ur=#kRrޘr,8-EVeFɉ=%g,̆`d:"JC)rV-1B sY@ɄQ%[aFz=dxO!^ܭeэjqYŰ)¹ٮ$ty^M2'UL'LCVRKA\M#RGoNu ;'Δf%Kr_m >-y ^+~1c!o!Mq?NMU@2$M?6DҸ*!Dw"Ș"d}ȥ4r*__Ho=-)O|5"IC!S Ρf AʘF a"5цvx;×,gP@] {WF{ jl;HHm!!Bʑ: HJ޲Jj*]*ڦA-UV]7#o~P3h7fA%7!B",A(Q B<)iMa |mhRP> T08%CRE"oc!F&rָYo# ]]d{T4ZTߪm\ߧ}x:9i+Z Ҽ >:HSHvB-'34>XvKiJ┊U dh`%Eb4r ?^|!Fg$p h5w/ !kX&XYtac텍-iv"Yx1hka9lD"Hw$Kh ,DgIJu3 31[Cc eiOe.{~2 m 1m$DsXBzQ̳I$0I e̲[zjA1ʙkv#-DCH'HK388q "J-tByhE*xUV-toD@-3 ~=ny3=aN5/y mb4ig| U ӯŨ*$뫠V%#w]j8ENDVhig&\bm}UP5G&g=)IH¥B WKMq?N*dw}<;yw!ew%nءGBGaGGiY>.w`޵(/VTҳPx2: ]̮ 5$2;D%ӝVJcA$]dBRĒ,JgYbPg;X+`7>A{],ʁWZgk5 Z&4v Z@A+v;HxЧUSm=hRAG]e1*hCxIʦkw`<2`5cۊ {j,h&b"xP]pus %'b)ȜK:{PUHTtlr:CBw CdV{S%" [#gX#  ~QK7+Q68˾P_rk& .h#@>׼`. ëW骂GH2sQJKet|6h<ӡQC2Oa匒U{^u`ydQ ia ]n.7 +@@&? 
X2%[i R9 L>`cLBB=OOrJ]B0!acO}|G͟jxɸ,e(/r4DȘ5_S\5Gi{9*#TaA}4xVZ15iM&d !VqIɧlR@2%`!82 љ TZ T&ԴFΆc#8?GK U}ЇZwMomIik7pa,^q2MU!vmzg욮g]Φu4kKtvoc"W6)u {;yIdO-W_ZWMt'k:]Y alRˊZVպyVϗw>;۠+-W-o~wQr̛y46K7uy|uM3FBX9馩_٥I۵_>W]PW4N7$\棋n^Ԭч̴Mh>XQ1/+Ɋ/vg}.c^s]9BٞZ L& ^JměY.eFʬM>"BP*ƻ3miכ30RVn3ۭFgi`gNLK;L)e3‘4A1\$` !%:d:Pl?N0<=.!/.2%i\Za\dJ0y 02 MXDҖDhSp<' `(0 N9 [w+6NJwwVjޞb=`ŠK3Jq2[i5F/rdGeH)k5( tN/˾ vh<}Y*z:$"jIrň0͔K$PYx9qr\(-;ӊ)pAY'7;-qȒsEXg|ZזxКs`21[ @NO(,:o1}ͬס ZG'5~XOgs"[KmF3n"R!'̅$d6%s\pIqG 97vb}*Mڰ<?/5=swsž/iB$E?Q49v1?=ih.J|h|M{oFoӔ~3:+ZV29QytJxi|6JύPo猉8SwiŚy{^tRUL˹rEEP2 \#~Nޛq=w?贸,?$ܪ[?5!xiNf%c_*!QM]tL@So $_'C_?xчa`v ˃+V2WLgyr|kvzRɸzho4y>9^m4n0e>{%t^kΎ` J6QӘ=?yĒg1ŗq$_^':֤+W.r̍!,Y?eZ׷Ф0ͅh$]ӗkWsIUʦ͜ڞ7Kgˑ/{WwQ~!=;+hyhm!UtoAx$pZjHrrlW}\U?+2 EkG$ɻ4?6RJLc+7;Ffގ<\_UeeY|q<>_f7x`Y^w^ir=G_'\&"]] dY6:9IlmLdvުatn?Kډ.-?i[WN/<$7^( b!o=KΘLX'AJ(̮&~8:qOv,)|}qLmuOX2?9h*r>䲎"5l2Y+!E.34,A1W%0fBDd^l_>ui{1i; w^DܾOom/~a_M]_Kr׋ ^ZxEg_\qTjw|-pU{Ura^9\Sٗ/Wp]\_]e_[n<KbrF/mu'LY2::["-񝎐Itׁ0F=o?~}xg9@~iHH!hW3W~Lhph֊`GƟ΍ͤ#cYB<1uX12B?ӸX'!b0@M(Lh=z0Kg *?`0 b0܋ tGF-b&Y*9 ̜K3J1C!}'S! ,BL MQF *9,w2jbfuB:5mBԽX0+R~fvfW-I)j@ lx&wJAA4ۯ8R j\uzL{idrRJse`+qPlsNF]U )BZ,fx4FGUTOYN,ݮj=Q}b*]DJF#MQ:k*f e$hYQr39*aD/V:8نKV % @r6:yDD-kz1k0bR}\jgvVb}3e~Ch:U7]0HO>$u]H*<0$U T~s3_'Z"5Tw@㚨nHɨE=S}vNsRUTuV *+ iv)]xE<"ʪ|W̮܊,'r:ub㚰-iQB Z%LX-.n$ydjR7Π) QJʧ0tiۓd`2NW$55gΩXEYH>ɼR;wbM۔(lA$dN͚P(PD^h/9XRzGlC5EyᓗH0ǔ @ Jb1d]Rcw}gِ]ܫnN>6H 9%e͈AZD PD,6A;W1MҲo#_U 8M\+jgݹ_QF16glYR.<>ܕ/ɶ_lǹ_6x̒L.O,ml,5坷7me{6/d.+v/oZ|59TK@`6@9ka%kR12,Rr)Ƨ@JF6zbh+%uEKCgoo@tVZ4 TYwGlΰ3 }Xz,|ԋ2^\2-g3^}$'&OWF쒓ȒJu 9'`O|*R(t[M{]J"*SRZUe :v‡ [j|LjYwG4L:v}ڽNSUi䤍"2&Xr& s64:b9y#2[EgMIZgˆ#c"3&cxYwÖσ=+0 "v&"bC="=E׵bSG$_щ߁`bNF6JpdPSD ^ aZ䶜@3 SRi93VttrM 1"v֝!d^e|Iɖ(e(E=.,/K%id^$吤Ǻ-R*D<ŧŝ]C>SjlZ>]#mW\J`)q >Jt8Wi/i=:h)=xIQV윊.$PNnٿ> CC3NVnGR+$Dl֦`%tK_z6q^J@ C,' Y$eU. n^`fog0Jq;M-=9 /)䧹Wl?[XMSs!YKkw,n Ք@}[GAcF&&P"Xi;1C6{tbn|IEm1+gI9gtx%C>;)kEVT `~xZC(#4/uŋ292%)P$h L^+C//"5yn4=Ҵ")oh ym@jT(֜Nh(/.,:n2ԭ"mX$C6JR.ldJa P1QǘUp*g j5ʘͷ\ 5bX|i?|28^V>??[֋y5E~m`돿 >L~̏S]9Aq{A0Ϳ?r.8!`ԚFLgR3žLCFDH-ީ;rVk^L$ /& '{4 ooE`*A"# k܎ᬠa  ͥ}x?ᬈBŏt98-W~- 쟟/Ba f]MchU4ۮA,vF_:F^Qul 1ͮL/5~ٺ~~Cfo[6茳vmgNQ}>[sd0~8oܿc[>ks_Lh0. rOS7]? joOwǓ\..yq]p Nxm}W ղ6h ~`%3/pJ/?|y^)R FGwu<ǻtino޴ARu9ߤ]W$=` !^ڍP{Um Ho5 $@7Α:qÜ))di`0dHbV3*$ٲzazΆ94|EڎhNDuAFby |[b IE2 \ VAQs1f );Jetv3k]#ѴdEK+ΝV"mwط=y!~0 ǣ?|Usz˘qDg H,PDN `6KSA6 1֕ [T(AkL2  eщ(+ lAB.b0l !1MfWm^ TK9~I_Cl1>Ѡ>hpHm`p[tm68}5u_pq+ByZZhZzK-,-VxHs+kV5KxJ#h:L!uP"4ɤejњVWeuWopu3`kEbTgm2F:v0cS@@b.E(=!!xoz[k[[}fQ7׾4)ѫ qm}Wt+Y?o}+-vB-ԫ_a9pLT{)`{gJ{8_iK0iؙLf`|F2mpVwsERsiXRSUέVpOo |J`++zCY JE.^'3 #\F_X~5ttW=>_~z$ L< %Yt0. 
dQ@,Y1O݀3_ s)w6 U$ӗI')Z~t0;ri?_E.ͻݲ'ɱUGRGLJvI>܀NypX 9-r͙A6ƒEiCV=E}I"oI$&R9frjZQJ ΧM"*dCRK=r 8MtMT#À Ƹ#a*D h;68%m:&IȽMZ| =s!;)5k V *fn*I>>\SYԶ/9z ~V uLJȃQ(gBdcBh.^71NJϏ(>i*S ) yyP R._ t*UkQ >&IYIzzazCN2)Ң ݺ\k+>Zy )`*dѰWcgmI 9Zr;Afx+BG`^CC,@md͇'vڟ1{ Kb*m%Ձ$bX֟ڸ0`죋}[{SoXG` S\Q%sȧp+v'=<]艣X$6J2aP'L#c&Nc R7>R/ʱ?)V2""&Z@+C5"]^mp Yhg3pT좱]f^s=i0}Yf)> j_}Dc$9ל1]6D,6b=Q d `fYRzUbJY e V@Bp"kdXPܞFu ☃<?u Cp?Y/ 0YØƕ1aXD`Ţ=g.8Ť +GkI_&T[0-{fɠVJ3~1[)9]z4v,AP l@-7s\ i@IkC`^8Q-#R_1lTS{M0ɣ/0bGbЉ?8LĀ%Ta2D'HkKI IXZYh6lz2 ߹X^1(y~Un1;ՈJu5Zў<0lx"̠z*(̥Ɨ!o(\jAk "^ EҼO=3MzVRO<ύyN { +H Jc$ 10Qc.X<0OR.!3GG=9fSD5XЯXXBpRXAx ##<}1XoJcv[QpHbh- p摸=ȚR$pUnpr}U 0c+]`vC4$dCz sL9} ÄP` L2}՜?#/W޴cCvZymwT]ySՄ#Uphټ@XkIG$hbg՝Rf˻ ̻Vj~[+=Ff{@sG`Up *C}{]xϧc:#T*xqxC56oRT1*qwmE2(qMd̗-)gU<Ի@ԱҮ'=v=zrTg|c^3J`=FPK+9wAr!Ft4#4cɔ!P-#cϏͼNxv#<TY騉11 \МQZɨaHj kIK=ū|&oƍxr֝a(dQ=W58-C)%Ԧ8P KJj.;:kݖ ,SJHVH8 a׆n(t[{xb Vq§#.Z#E X4XHx@ASǩEԞCAzuu4D{S(sVA-m} !% 3Bs4,qkM@ N`r=I\XަjT&][oa,YmV'V^0Fg"իѫ DPң%Uczw)בGcOPb٦G;hL3"ts\_}V&`t^Ȟ h謥( y>Mi(.R<,ˍ$B."!Qs&̙27<y4oUm5;fQɇ?'ζEtnAÝ7ҙ(єJӞ$9=&X'M; 98g`H" &qHeg@9^ ҸuԕV''*- pT<_Ev}$A(ξ|ݠAeX=~(ci7Ù^Zm7ۭVPFځ6g[ 3pW7H%B5F%KVKhKcjR#jj.E +CW0mi h1QNW,M-]]ImA +ˑl ]%WJڪS+E Ath9*JQwЕ\& 捡+qcaJhѯ]%tlo9rpz`LW+aahՁa(Eoѯ^Fbel9r6Zup_uK^kZ/&9aN;.vF2WiT9 %J"ۏo˻KIWf4*{66}oƦ8 7 Ajraf­ߥTS)ž<.Y< !Lc^PΛ^^nFB4H7`˱hnHh?v(Wn8A@Xz +Lj ]%X7ZzfNBZ:ELF+$1tp1MVc+@I1jqĕn]`A\Jh%;vJ(UN8g7i CW .kUB+L(nJp0i ]%7ZF:E!.DW_apecUB~ PLZ:AR*$uè9tIt2ytP.ftu:t1xWy*ХA3A/)l(78?{G\4opln5$]O6Vb]& /Ů $ **.q1gr!Cswzs#ױDC!kHm}_Տ}"ѠnKmWj }|Ÿ~|&gT_T{՞T| rg*,z_mItJ6ߒʚ[R=[Q ~i6~ ma Awh1khZ=y)*ͼ1~˦J[ɣFS»"ռv ʉ1ZZjrH,|~%;`~bYPծ}'sNmɸ:EQl(>ټ8%a}|%pkSxQ2c3fdU]hQXnnPr xElyokDui y|YP_vު_/.o;=8GBNGKC-(,6H$+?j4eю ~Ǘ{cyz@[{{O㲭JU-hͷtKaT=}Ȫ/򛩂ޑ=ǛEI9Ckto3{u ɁmFOVR#Xbv+cv_Ml[OO LVL7WծV5O7k'H?ql^sxN+,gɿyN.oN>k=hz޴vw61~ݩK [p Ո-Wc .5s*3۵&?)׸5xՑ~m]+S=owY-Wa!;;K`}cS+l{+̏QEG 4PQO yz+o﵋xVppNx^~n}W`S~+&o{brTcHeyE58м678pv6_7J0-%79N-NlOwnzF SjA&Yi3>xCIO&z5o8w[%/hdVύSo٭ XnMG6!ݡVՃWK&tɉ;Nn^29Q39tSn$3䲋 j#qeBPk*g\,cGlg\nܕ'k 팫1 #w%ru7DSǕs2xr4w%WK$6N>Jdq壊]4Vw+K\Cmt>N%q z5Jӱ\\M2(j_HJT쌫GMOS"8+\q JO]J;Wa˦J)%ةC'qj޻rjO\W;7=[er+hT8̅3eN]T!֛WЌ(rW~ ]$\/j?,>\Gtڟ)]{Eb~{׸iڃ1MY;,ȗ"YkM\VE;B'|L!wDvQ{GT}=J{ DnW6 *IWG+x32D0np%rWPgeD5q&"m"ŎpEڪE>NWƩJT:7q4:JWO]J63W^s#\|58QMWP繫ĕ`J=M`Jnڡr\J3? gmMGA(M_0}oL7J2o8B`u+x^p%rWPShW"ɥ DSǕ 4qE!(pZnp%r}8+Q9M.g\ kzzVD6ʱ14q{S2(~(+Q{pTqu 4v+t7ozp%jysW2SLj+:AFS?\j+<q1SO G*FWPDe箾\ڮI*zkL3E[&Mk)w N^2Y0|S0qB6aI㢗1e[Fa="ɍXop4#h|J*a 3kLnl;K•ȵ\Z7y\ʵf\MA|)DJD-JTN-Wo+RxWlW"v+Q{WGf1 J=JkR]\ jLWP93^ WFvVU>]VhG5'pŅ$}2$m|2ݔwwB:[~.{\h??/Z~+r(۫֡.Y6 KKI})DJ/w(UX\|B|=9dK ibo-o.S97-nD|+ٮ>,t/HW]νuw1{zҜvZm_^.SWd_n#W#_T>]1_;Tˆ^8vu8}\58Lc ݡQuF~w^7yC+(HmQ?g:U Ȗd?˱%C{I2YćjگOZ}nDWw[hWq/7CE_#5rn *䬩I9KEHWy\;ݟkIs4PbJUY3.)՚ªiTvNXixe:`}BTE@67Q%ᣁI:dՂ2'Bj)VSmh-58U͵1"r QcNCNѦjQlc46?}K\XV\dlr-PtMSjJ 1O",=նE,='zΌf 9#bf8:BE5C))jHx&5"~rk@@ۘR:WPvl! 
)K"LEt(y@S0 ="?#!x\TP2{W(n1D( $ڌ!cyqJL#XH/eJ>XA!pBYO 5>"2L"Aa6p c/Z+gMldg.>J/Ԯj1#.UUQf,"acLź*)tF\xOHZ V< Li0^!Xl5tqX hr@{N A)I wkT<;`d60GOEVda\?F}¦E*SL d-BةbM1~v?W/.ݢoc,M%8C*}2 _{`#`KR\yQ^ B T*w!B<(fvpX X)LERgj{۸_/ Ko ɩQ$%'q/p$Jc9Ti4sH><|3Dx:Jm6(ZD10VrDŽ8ࠃ s$1 M-]`UIڜubm[#;-]44>p  I䑩5qVإ @$̤8L&ĄD-;䍁"RF is-X`-g8k^NDٚm`y!=h͢:siPJ j2$޼Ӑ6R.Koe^"vEjƿ݌0190tP_ϻI'y%T0VtI u0n y:t\L,< Ӧ_]L泼\SIJd9YmQR7i!`3=0J=4';j huZgTkb8CV,r4v fHƸ>4旻[C{7?h |@SB#rHoFs衈<@nxМCq!t {Գi:@2!SImX+bRhGϲ-4rNiuȍ5RFܖݙZYXYZdq˚I<(d "n1JZc@ײ Y#9A.uyQPCYxIkJ>ww+QPڠ@n `)1#0hYiC3`2.9%a F;5>T2jOZ S:p8˲ MUВJnUK …`~7X1RhWK/.c,hV",/(Ȱ T\0Jkl׹5HDJX0B3:R5?®zm/gn+n;/W[a\6eQUMd4|@x?oV4a 1ƶGuٙ[^V)jhrhE;~ۋhU7”4glhrm6^Kb\Ϲg #i>/[V8^~٢i9]l]t7ۯ>h[7WG(u>7y9z/l90޷&tՋk۱cy6_'\]mh!5:EPYj7"D> |@"D> |@"D> |@"D> |@"D> |@"D> |r`y> z|@1>|Pn G>y|*D> |@"D> |@"D> |@"D> |@"D> |@"D> |@Q4T9TJVU~>Ғ}@6"D> |@"D> |@"D> |@"D> |@"D> |@"D> 1mlM> I#"m6q:EQi|@"D> |@"D> |@"D> |@"D> |@"D> |@"Nzžhϳմ\֯sNծQ_ϖϰ&ےq֩zlKAVc[215xےd[:X"*M= Z?x%PB] ]A*ʅ \kVUAu)ЕUDW읩̨Z hO(]wX$]yUkQU{ (!Z2t9nTr?]زc]W~hv~($ AWMoT?[KHm~+Pn1;}p/ڮJ8YfE鱖cg矾ft5_.M)gpހs=VL2mݏ^.G^N}9YEvz>C׳tX-b? h ~n)"vm^^MI=kfGgyʺrnY,k-L./mL=/M@7?I5DۨmM9gd?SR6W6[ON顧 @ 62'+*+UCW=\_ ]Z JCtut%UDW|=tZ誠btUPnm":RVDWsZjVUA)% ҕai^]^VCW˙ UA) ҕeʸĠ-o'*p}~h/ JC)ҕcpW]🡮EW.׵UA+?^P*Ctut93Е2ϧ'ЕxdӋ3%s3^:ww?۵C$Cf{Е zr;=vmp3bj,Ř3d(wL_vpfrNj4oɋYY Hcnt9j:ۣ\?|V.nmIΖ=cp;L5RʶQƇ5Х(/~|oYOtToY!+!i6%ȇh'tM+.r\oZz4zyUӺ"%j\+d-JUrJԖ *9dEtUy*pU5tUj;t*(':A*򚲫XCW ZNW+N^*pUA) ҕA`ꡫ{ 3J>λO3q칫#]VlPꁽNAAWMz B8ڱPUszGe[s@|/\'>W7gekM-yCAӦ"*3W ]B JiN2ƊJ*y=2+d9ntcӡ+c"*e=UT-tUO"&OFj*+>~Zk+u|tUPIҕ&`Y5tí ZNWj]MsW,n/fv$]9ÅrU+Y5 cwC94.՟BW )*m=bu*h+oUtc##m%jt+Etwy;H{I܎k=VZ,]=]1W5s*zOR Fex^vR#4ZsN-[yR}8zkingEjTP?\_.>+ݬ٪Fzy5aMi1[GLyeXƋG~ׯ_#] O'G6"4{b]qݴHd01u)|w(aBu {W>~X۸^^p0ӮK ^\?;t_["@^]\Gz:O )~)f_?]]w{ߡ'໿Yclcb/7^y;6߽~sx5is.mPm22K: +kcV*G˕; 9y Mtw uaI<^605,.f,2ãҝqLK%LDž7"(Ld.OlpJ\vYϿVd8 [ѳUbAW<^"(-V=y 2&~zOg3w/w|*,/67tq~/}R2~}:XRM?x}u"O9|#rs\D!|r*$EVj0]~. e;TgS5!q'[' IkڐZtI)ZډU41%esO%=VNxJQ GZn[UNQ.(8p'>/wM5^,U􎭜~u.|)E]w56_@c|{qƍ;œqy톘\>//rX};O_~pѾQǏ_lf^,hO݂^faRyD\4L is{1<ՇeAtA /o63w+nt\^.ug [%qe=^~٢cғf{:|<o4B#lory1Jؗt2uС~VSb'nǂUeH9v}xTRR/Uݐz.dy1ź;G+ 2~=t;ds4 ?hu}P;?67XNw5w'iu,ݟ\nEu*wtuX{8N޵T&p吙P cdh O:uh-S2tmNZhRcpq|?T/#jâfb桄$̑$^lϛng/]6-rvBk|‘mO4_/ce4%K ,gr8KSO²[}vݧY֕/u^3IOYY4OJR΄Z:Bu>YdVZњC2y<&κ+"eN-!'<~08_ξn'/I"L#\Ͷvr$yB#_7irT|Z4֏Y2- lsguJY*퍊 ='8V]J杏]PQt>zEWiNhkyȹd&,ZMI<36T+eYhoļK%ur08Tܟ SSSRh2)(yV$0wA߆},ֈITNz M<|+BfrWr~uN?{F!,ضY|@pl3LfHcmd#yb+~DmNg&jV7W,M$8 QKWS}Cһd2a Z-)omWb45A]1LN;6,x5~X RG`kc˜{!R={ [s$tkoڪ+A}87vƣ͇TVF&*!F( G?MCw ɺO7gL@LhKD1`NuJ+,m iЬ:xK3U;Te8jRS%j0t~vgiS].>~ |y4sTC,9^S[pULO)W2eg 5)Qro=e:|lI*?åV^w؀o{~HA{77]@grs= }$ ms#J)$DD=@:fD EErZo)N LL}diz< Ӑ4/x8;Zu^ΕAy[i01 >x0G\+YQ .~_rCۺ~7O>w9`Ve2VXMFgu.V,qJda%(YuM QX&֦>/WEٜ$3$|xY IZ jwa~x}kqQ~JhL;Z=hEl D'5`\(%cBHM #ǥZXWGQ~ Io"Ig$![6Bh9h(_^s]Xu ԭ!&BA &kmbPCݏ8cɔʮX| Zm__C3櫮N0 B-,uF}~`PlCZ a"P2;{'bJJ{8q}ޛ(4s1&!("%< iZY76LmrjU|v6aocO':^>-zC0}Gt/=ZU=5MS'jF Fgڽ\xՃ5կ3ߝΏ^fqʵ0i8.ợŻUݞ ƓOvy+Vb[K斎5#6yeyFi>^p~9[[VWmjg6=qwZHmذ8z &eP>`o~j!0Q{[~:8?-Yͺq~>%uJ<΃ʀdz(4 TRxT˚#NjӃ4<9b#;^}wo!H^y~SU9`Y_&_4W?}EӡU4 ˧^k|-x]ܙܺ/ ڏlKao'@|2]D_ő`g? dSĐd 4IKTϴBkm&s6KZ^9 T(מB,M& !fd($MSEda΂:,mәdW:Ӊ[%0w(;Nefylx/ª>mqv2c`DG <^;'sqkmkbQYhgATl'<%w'V| <N#)&oe&-0jt JrA( UTKY{~tw"2b*&9B CB)s(N1J3J9Nu),֞5'EQ%(|F_2kϱu&HRBf@tIV]l$EGK2XXAhv;С/Wc͉1!X5xlORPq=* Sj+إr+ٵ3 ǝq*<)0L6Zͼ_> cK ;4 ^JQi"ylF{CNwˆQݭBiNѹ;?Y[03Ǩd]y$1.] P$(V侯|YAeCkٌ[CM9gY:a5˪Y a?w5ӵ{.[=:K{hx23kpM`ң9egP`j4DFDFKwD {hH 5 WFE界Q! d2`ls̔F!+r%:}e H &`+EKP&I Bާ:+uB1$L!e/Ks[of'k<]f\]iGBA<1Vw~!ucTDA ׶!֚-h>`=QuiOI{J kgEKQ (^HW˖[ P&%TM(~D(r]ѣNP {悇%hn슠ج)<<4kǧ.f>zk| &&}t1&[wgAK3d܌t? 
dFkN;?O3XFBjԀil5 dеN|RmKDp0,$MIr*[OZ2-]PZ2X2U> D$B3qn:g>y??,ۇ~s5Ż ]IOGkcC[77?.ph])EzI`L0Z VY*`K,s.%#Yg6"0%R! )I1HOB`+M ٨J]֙8[F x:˶'k_={js͜b[v^3b+#֗y^:^Ji2͵CvY&Uϗ/k%/_G^iW1C?qO?I` /pgS!|eyeet.M΁uӹ {/`_c{N׷N ~Qވ+mh>8Zۇmgpcqס3~1=[,\/7mFkOsq>CÄ9pTw||j~92gOVњ)Pxi*LSɞD=u5/[5]cq9o=0+~\?7{H߽1\/=2kw W+JiK~i dBj6lr s7ps}DgiG4a}os~Iܳٹ`tooJc~)=VGј6ҀײRJWLiϽ4N5BZe<&0 q\Y (/X/ cPzjkUՀj>CZaELƋ3%+uRxp*UwXorEvu&y;GS2Z"c:gkx8[+gPwg{I{"/ v\x/meЀ8sR[[+Qȣ\/W w3xىL)/yg_{*m( )*0.6.2;bbMe :" F9ƴ51"ӭ"x`G1 y_% XRBqڻx6:gZg-f_ӫ,ne{f:ce~8h2z&Kqy6޽G]O5GpJb Gmu~'FFu}o@?>Y^sҪo{(FGSZ5 (ɥTmr |cUBmMrԏq!! hXSLYxkkd]*fY=ec4ު5UvThDdg79 \ .>mLblzO(|ѾKf[&|f Fo6`nZcmld2nfRIu<_I<,*}|*ƗR*(/NCƬuZbSe`+l =0Jfc)Mwڳ!RMhVk]1;^e}"Ci7,Mt?,urJL߿{*0P R8ˊs^zq ..8 l7t+vsU9Tj>.Avd;.LWQ~g}r HdcT,jo0*!\E2)%4u5 6@9KQYUʅzV5L*oW'X]WV#Hsze K"[ֲe`Ս`IgY;fi2DY[ (MN/Z_ B!dpuNVE# Fr,LkPylHǘOZoyҼ W[wɿ|qzmZ~bqm|1YN*1Ǯ++^5+N"|"Bd\}5N>ZcW v+U7r<t\錫GO2՚FV5c+52n[GLXTҎ-zQ2;|A̰(Hn C'@vSR967' /b70i * ֜%++x*" Ǝ+R3+@ HAwkDpjUڌ J(nMW$ؚdpZTpEj;H%WWĕTqSr`u2"B+R;T5\7Vg\MWJRHN=5ĮHcUTi!jyB":Z ZD3v\Jq5E\͍R * ȓM'[WRqeb$+5XWQ-'2JW7G)a{p\uM.\7| \uT9DW&S/~T,/%*/:0kU;K[R2[sI:Mv:-^[uZr_^ 5KHv[67n/S4(@$ɑ`'Gr%ɑZɑJ=)zrH2!\/\ܡ5v}MT:j&p[4hpEj;H̸"EP)=֋x2"(Rc=R9_2^WrT HW$7Ul"3&+eZ`&҉]\ H}Tj"5ʲpIu+X*"r+R38E\Y?\Ev H4cO"\6\u_~\u+[Z>zTʑupe3=sͪA $m͹pgMq`| Z쁩q7h9|A"ԘsaHAnj-U l7Ln`UB"V'(W QIVUTi@d\MWRT& "\\LT;HQ)*:\p P S7WRdgph-pELW$Tp%Trȸ cVd]`uH&NjюWggpL2ғA5*\Em!M%f\MWiRrIdpʵ* Ǝ+R2j̪bW$XwErM2 ]Ezk ⌫q%ᰩp)6xUGrDrհ`GzUGf\p،GMer+jL%$); +ٮN6J@" Z/'T碓tZXsn+ܔ%)RKP%]l:},_. yr j\iSHcH5ٓ'Ǹ HMWQ-21v\JW)OL$+Y*"b"#2ep%:%t.l SE3v\JMW2Jc2"2,RF@*u$qڧҖ$~-I}:Zvy(nv>f'eߚ7rOOk@-&=_NrpxF뗆(:6h(T{G~h"_]^V쫮N7=;,f74̀=QjofvoC>헌wî_ʹZuKƻE=ێf!i65u[l +@#7(O,|:?anf<5uż a}8=@os9_.U?'}swq]祾{wXQYwBPVZtVK8-RI;mtYFmG[_fI=v}ݞt^]jy7Uj~z^k&:բB\ RK)DUI.YFڠNZO:>J1IUB"PVYS**ƆEEq1Ov&[Z1H+ݓi!iRMXsd(#X Q&IZSKƘQ&5֊U$( N%c14JaԭR :OʍԒAr{yE5KYKRf›,NPRzl&V!ZIFRKA("M)'͐f07:eZIPtPdQ)bkπ@DK F=}ַZF,G@Scw ۼ,QѤ CdJ K \*ci|ˡhØ˯ÃHC3 <04bI,\7χ,SRe1Җ  ,RR2!'>>s9 AҋqkJ(%9ʂM9\ 2E%d&oYG[%Mɇ9H01DD6j9RD+V #etV`+\`~} AJm̘CD dMDM1j0f;Dp.IgV JBuX%HRQ]J u;aJDz.ϐVA%k\0`)P[݊ NVh N@k!O Cu\qU&' 5ѤZ,TbպJJ@2Ɣ'-s3%"[MrD$l 5Z 3TW˺R`t^# L6QƬC5,$19TUFdd}i%e LlQ<)Sm'-tE*D4`XȌ2ڄWF9–F)GDIP꫐Q64~ * *@5Be;Ϊ"(X ^P =,R }՜ 5 e󠨌[K@4rJBB( 2m Ќt4gJdPrS }kJʃ AG"g#n4!w q b0& 0TsB`@YC8%@ qD5T&t/fhS1 E@(ƍEWPfҙ5ƒ-d~CB['O&!Jۡr {AeFTcܓ. R˚QP-id^GHH̯I&:2JDA Ք`#dV. D"޷{2=jZ jT'&  A^,&ĥ("3f' ;&T/rvH')/U8` wtiy8ï׽^4JF#-}ۍiZ$,>&F@uPqi&t0CJAh2@]$NjX'?;BUD)eI-`I~D?@g!HEDB>` AJ./5,ՒnD)hʠvPJNZ%lLj@h'];r.!z-b@ Qٗ;tFNmDXjr!hoX\].s]LO, n=AfN44z:6t f"wP(r(XuVLk̪_kj99%E.Ę@9YDf|5l}IeFhQ"ADy XkږRA5G?#ڍ Mד&%\"uF;(X&4 hT UڂD2PAz{Zݿja;l V")UJ}rUpՍ6rM%ғ3FF/L&~Be IXQ 4vyT,+~x.]ڈr1X *S{[NMŒ+~ҢǚA*Ei J>T_3^F &C@ vC)z9>NKiAQ+MQP?jlaJHN kN+ (|4JNaвf@h2~]H O4%0h.䤽¬S8 8e %).XI@ b~;pi0["&Xb\NR`e):&Aj?7c#t$ 8uSL\Hڻu_]шw`9!= V~mys+nZ.7[{àRGs,'bS-gWhN^ N9w7G1g?Cȏy9toAk5n.W]׫/Ww߾j\o7ج.u]u?"݇{KGc[0Nw8y ~?hoʘuϾ?*}eơ!N]9.kz\ykSye~Z*Vosuu}}78rn6WI@oz4,/7H{,ַwN8"~iCv |c=?nfcz uiLl +{O<1a]1Hl}Hv+pVK!%Ve`՗h>lbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊VϰA}U" pحzoJn%ڭ<ev+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭn ڭ  pح[uan%ڭH nv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭ~A9$!٭ح7[zv+V_*ulbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vlbۭnv+[݊Vڭ>/~пΎ^;zC`ع>~ε0= sH0paԏ?wz@Aq#^秫'}fz\ihcO=6$ C7~ƸS7"rs*p !.9Grv" M9.Wi^Ke+mHh]l!YށYLcbwS4ErYd߈)5IbQd5 kd̈/#rwp(a^W N;M̨ |~yK\t!ԅdXz :;Ֆ5E7ZةJ! gQ1tpj ]!C+DilKWgHW&EeD+1 ړ+@)k JPf4i] -m]  oܟ%:GTQ$B9s^4f0h5NWRΐ3 +,m ]!\jBWPJ3bOx "\ۘEQBO%#-]!] 
+B1+DkթB+ˤMZ`"9 M+tU P2b[6=;֌vˏLWGRW@WW7B^R52Uj(BZʸkRn졷̶-@^mc=*&RAH[\> ?(ݰأnR\`!Otų}E1 {yr}Ew춻i^f{o9 <HTBb /-2óQ:[/dp [,cvX-&;-F-iL4Bp r0';wjd46AC[R6s *"c`BsA#9tcFrWe>v$w#jV6mN r"BWS+eKWgHW +dc jBƜ:]!J[:C\A`ќ •)th>uBvBKWoBWG +3Doz^thO_]!JkZ:CR(ڤ1tp-i ]ZM<4-] ]i4mOl8 ƨ+Dũ9ҕxw .'Yh Jln@4F7TpMSt5u g֪1tpl ]!cJALKWgHW\=z_B7jLT꓏ JFeΑ"T7i0 .k ]!ZiO-]!]A X5G]!\Bs+9[zklv&7)vnTY ;x2] g~_ѿWqn4/pȂcT>f C ^Z\'hn8:&:?'ydul"$@͓ ^\ЉPBd TTL֚x;Q/j^!ѭ|쇿y%Lxg:ϷnhYg<; ɫj&ƾ&|;xw ,[ Pjꊟoi&=jK҅]6rg[M #NLOmpKu/Iwӧyp3w|Hxzr׍bJ$T᭕x%)'|'9/rw?gT0F뽟zگ́<]P???v{7~=7rR9^F$8{Nˢ LΦ~zݤ>UFijMuVY;-b*\1T3c#͊8J pz4I<9-\"n(sH[K&f-N5[kW?M3 ̻eВBd5~h3t,BPeBN$aɂXHo V;g$(σ#&PVXg!D)o2Kgb4јF A.pW,,ЁBX@GtNXΉRKRw09:fY' LFO1Y,`2z3ǣAzU;D ie]\^t$r wC埗DX l5m `NL3P/xpc) 9yRZhVU:NnҽBYKg1 ^r_{iZ۲y7`炿&oeԭ|ǵ/1<Ї$p7xՙm0`Sb#E7;~ < ^}0YW`Ґx<,(Gn %%f4˹A͏0:Ibu'{,GP4ܻ^JX?TU~&;-)+F9^WQW`n(dƃ4^ \C|k'A:c P1ɸ+!IsL{F+v;Gжim1m2 :m,ݫB^ѹKiUp%̝QdIKDfB`iLm\ZEM9ҟ"|4`o@ ڨj_ ˢ9!̘Q1X+  .I}Ql}n| SP2QeF7nY?X}($¬֌Bݰ[@T(>z*UkefαUGGRGS1OY8PBIt"XU0cx!w$_¶hIJ鷖D$)0\r gD xvD)DTY(}Lb吶:HS$RRn]tG*YP@202JY͒68/K,tJskE 9o 6hV *(TN>G> Ek>_^V =75H7BRdhxTZsg׼֟:U  aZ3d,HRH>eR(9h( r)hFrYObm_VbIbrk믣R ND1 {uLAڨ% %VNbaE09,N"txz%W{Ԝ[ȒyCw|ӹW:Ll)pЅ f[cdxnr0 aɷ_E՞(u^R)%zB jIMR7drcWJ#I*!?jw 8L1||vް1%SV8@L Í.DqS65v:U t-eN)+sB Ea |͑K%I,cYY̩1̣\0) ODؚ#zrȻP GxܗQ~ bW}y)lܪz8 w7yo0_Fb)htΙ"J!lTQ}1FC`光"zKe*YJą3 /LNj͘ ;pكkY_q^n7׌)1qE;xta6=VeSr))  )Ecp[*D,ΔàdY(jxbAGpW{mti]]r6$䊐/~ ?u~tF3v'5eu]9,GaAԳK:`4Q|z7~AHo+\س\X_o<ڐg'. rf~ 3,L"7M%(䈂r0Zլ%/>1(nxrkLsuQ'O*7PT :MeYJ}XR6nuOZO>;mNMUu]Bu2GlXU _pPEEWΛau~c9N/J eKJZ9O)^\?~ G7@RW$: l~smVBRSk>ME΋/g>='xRn 7ovsyQmlmC,9 O%w =+9+,\v}-^Fmh|!Z8a*8+jG* i "X\xp)i "3x^@_%g%. u S^R EԒ^XD!(&8jک3Puh7rFQupm>*"")J 4NH #SԈtf.Hsx$]wEU2^ׯ ~NLpd~V gϭ?{{_)7\Dh`ډR1-%);cJ'&hSQI-#Жf|JĠ[b8b&lf!6 8c˜4" I,ֹ@+"BL"X+ FRsrcwjOH1p.}Gmp8 G2j\y&/Aɇ-&ט*/OG+l(T~My^2#L-dA\qK98RHZi *'9aO=3obK=keyAVF& Fk (Jc)юǜI欘7T?{۶"{C@p:mSܶh[cuȒGS~gIIlіeږ&-wfv[%:ӊp݅A+]`8|b쵞sG$䒣|k^Jɏ7yDGnZsr/yp83[ְBjG\|\y\~.`s)+iۈ^gL_ 2s)5WV 31%Vɪ{A- ݃]x_R*ՅMybyUf˓\&򟍙Wc7yht_)Qf⨫W<\mP]:o*/lؿB ~`ӦC8|K->.Fܡܥy a*tܢpvy3&n_69&C?p[S.nqgi7~ȝh#œv'Gn1s"j:yن[ˋzXu*&0@^K@KβDlr& VtD ej%?ٳ<&1BBg`x юuN)O^RHrAEJ#jz'<)ʔD^+F/K`H{ZZ<:Zuj = Opm3&(en&}أA?Oz_oJ5WW?#%4tTmG*6 *MuCL{'v `yʐEK"`yܓh/Pd]tS>RlJ3.>B1 deZw)'P'0=;_@Yb٧9j\~l`)<[+=@ ëOW>|<!Zب"ɰTZq!Bt@a,s KYG@5j'<0I*& Tu! h*87ޑuѳn"u fD}%J*7-4&RP-b_l8O51stR"J]:y dnTo }LW쳫zgͪT\q2%\[sXZEQӫW8!"('j9M&p)r~n(NT`=uU9 lJ-/pᤊGO+.9N+N6cX8Ut=\箿^_UV׻,ȘBP&di;[]M62~Z|1>yN0BDŽ>.֞i~uQ_Ckg>:ln~hzgRlٙ3gwr{9+b8#`RԒ- fXc3fV dᨲOs<@O.7<|2rsljY%R'9 .Q|+jQ*8Q7]yO!ny&a|՟u(9|iKȁaTE\|1M"oQ뽲GU(ԏYm"ߡG^޾>7'N޼{Bꠡ {wu@ޣך;4mj[4͍آi|oӮljRKIg2!Knf|P-$]%Ы:DTS|>4 .1x/*g K1K-(W=83! 
EdY\|6,KYi} 97Pߔ'DG&#>:ċɌ68ģDnM5jM&vBijyv]jrcΕ <`U4i;2m&m2̲(:l8R"q"`JB(Y=ј.F{DĻD"wMR.X֘x&cIc$Y\D-I @NP̫`]TN[HIBar'F(ɢ"-ֺs;[WA?\iN=M=V Ps!H1ēɆ%l$9H-&] 쏦#8SA1)wI$e΁ E18%"JIrz5Te$QnBIy ә$HsJ$jpLAsXZEJ+I򶢼s).#.K 6ǍFFSv7*̿}w:!K:Mm)L?emd:ps_ gU1ȺBw}PĊ;s [p(8{Ի "|4X =hSyjb""HQJHM4kMyk)MI˼/;(#fXxEĆ`XLռR[.yt ˵ö ;DPOYsZ< F$(EY]OpxAIx; ڲI˵8d2?K?Ep,£hƱFPeDBLhYQ[ ٍ*Eø]ﻡwkT7NT7T7~:_Tt 3hZ`VVwi[rq& nH\~*תUޢJעTSf"zwmmIyY v(f Y 2IAwW5E*"GoE-JղcD뜮* 5c L)JJEHNOlֻXRuzEKC58,!)Jbaj9C3_6ӌc7B7`bVzo3ޓz۵tj`d(]rYQN@"[9'`O|6 4Elua*ًNIiUaS[-$ T@W @lTDv1b7~NP̾v3 j.Oդ%*b6N Ƴ9(:&`ə30kXsF3d %0Κc VٲH AT16~mٮT"y}`=0y4.$ q׸?vqx W%wkA{ Ro۪H{Wi]IpC$k{]_P6 lx1U\m_Jv@ߓgId&YQZ VN_\Uiz-x<\U)W_\S9 a;p0q~^zᙎ=HJ9  puJ`wicߌnFK֪<^w~5zC~;m80u$L-/뿎.եO>?cR 滎E8ן~};ɴQѩhglI} tyb>3jY17&w\` xyy1-/onTqEI>U+wi&짏G$Z90ΰY79IgGiJ i"@S~~Gyѷ:󕛯uvx|/^^e-OI$`Pj,A%edf&ڣ9Dh@g)豅ߎ$/\{AOO!cS6w9z&꥕IPlEav:bu*KћO61'?qptR!$;]=ï_Akp|7_hUl?=nωRwdxƐ̮D煑"dsYi!ǻ ȌR$Ȕ3 (&~\ 9zcb>9~o'J2a4GU$pؼ}IC)79^!i6"}L:Qq4`![N"E#K%BV%r%jMYjHzXI13B%52sZY[dh&΁SdXJ͖d!T"KLd=)JkEKhHKagP~aC#l4a3hOYJ^ $i +12'Št1sKʣSqM&lO63O"r/B2f+) )s-ZI`6MSI.)׼{[s1$tooUwp3A뛇o[_~oA0M2~ 8iV 'bro%ypTqqczT(w>M}*ĄƹlJ"]i7 s!P^ `}%B,eȰma˘xY= 8kMo^_| wϗ2vLplE-*4+Y&# xۗ`LGw=|T e8z7no~LƸ`ÓC?r| 7!7TG'PImGAqz I$S{)>@BFYt")#{ ;HHL}zp7jKn޾Kxr>r(OD鼮8pFђ ^'OCP?qhsԯٻlO*ݑz=,cUquz_;ռl,7\Oz&OmwOiSG^!>UpkEbT{em2F²_̘$TZt)@!i;]pZjf9M 5gjqcYx/v\^p^",)h"U\)^J*RPDކBǍJE'+2)70Ωq*|}-<]唺QMlA$dN΁VeR(PD^h/jkV =8"5EyᓗH0ǔ @üXƒb1d]RqH3qLTdQ'OH)'*U4 HɺfSOAk@Bɲ.9[n\@DIZUd2kRTYfFf=H 2j3q7Tz[iY4]~ܤ=ɑ=w=wDb>%Z?{jm=_lT{OڧᅯTMQSZ DrY +1XÞ9`TOĦh+%%Q7XT =TqHK:Ji Lj?4Uaa38 }c,t>)n%|_==ϫ7]Ow.y9] q:N/2%'-%(.$R4sTYϗۨ"BAhUQ Ҫ¦.[HЩ0> ؒ|cn&3ܡ}Afj]IKT 'm(gsPt) L43ga.fJ`59$"eǑ1_ AT16~:iɑ(e[\b,/K%id^$吤Ǻ-ەPJ$o/&ƅ6s;{'Ҏ<¶Yajn7"{vO;=7$k?O 4 4mZ# k ̃!5cdFIJcЅL)A*&j0~ N!A]=9| Z/8RC3)wW]]\Ly5OZJ{ފ><~jq7v%U=#zZy5ǟFgqtѩwt5tWtu(t%V k(0$tn*׉f5]oy8%^Q7z.D;l]aj+yc2+۳ճGF]5׬~ۇ;ٷ ľ;G:7;w]ei6`}uϦWsr>YqsN*q7{5M/jܝ66R6>1/h4/ D_3O?[ V7~?Ylv]8p ?_c I]p/Wg'Q^E'*U;nղl(O.HA^?}]yoG* 3شUA8'6qGS$4Nᆵ )((QvDw_Uz{q_;D{m #h~~zo~ǣuGGSvx4&9GL.9|.rW"Dt1]ȑ:x}sү g7u4q/ lOBG d;kyaa"Hp 1`@&㝎ƆVk/IdAX7epIT(0<9ә||(LxAL ^q1';FêmyvrOn;c`rF&Ť$>ʒbKƉ,ּ1)0Jتsy<σh,xHf0Y)фIEH,G (:,{xҴ;%= E$]bp$()(2]) }­Rii !',u!|RiBH"ƑrR6ysҹb !D<66"3LӔ CkQ , 7D|JhAweea,GLrQKX#J@:;`M߰##lyM9_ny\ rhO9ny3=a*ܵ'o7/\MYQUX%(U ʔV0bUxL}q SC(ܣG?u~cA37z&&tYQL@$ ȹfL:}mKGbXȽrLݠ@:DyZ Aj ~)~>id=}duܝ%lZwءgBG0EYbii'&6 KnӢ$JђdKk,)QV8dYWhQH>$O A"aXkZ(DCgj."[NVӧcB)7j<f(֑a Ƹ#aRE VB"IfH!M)tHgu8s<;5>qy&h ;LЇcpz\t1H^yȭu lV1QPɄdcBΞjJî)ءep=-Q J!"2-̃el.rf) "fY9nʃ /;+JU%c"/j4G%M켘ݾY }dsDl/fF|~ݘKT`&:d@aS!]1#@/.R"+gE28CHYuނԴk5f,`ZFLAio45[!-9uNmO'SEyT!8g)w[EMiK;{3^(%UZLJz1F磨X? -Bi8)iQod;dt '@h{]p P| |Y0)|*PfLN6*Co$"RQ_t$}Z~a% "@ LVt* x>9 69S{>N>mIōoxMm!v:_-%[I`%T$k{ש w؇V٣Zкj]֭w95vlVBZuwvyΓ=ܴv3r̻<ȷ2,d;:n%KeozKkoa}mղSZ >L:k᳔lE5𙬁 +3iiZN&AIIqspOo \&bz GGuF[򝙾G>\d $#X~mq:GWj~QK~m7;`]T DZg~sv)Tཎ&Cu`RPBRNHdSc0$NC:mC XH`&CD{fhpXn;Ŝ(arf#gmN'^NɻFgUZ}t<`ˠ3,SI?A ; {ltJ- AR +#\T9(>sVw>۪?8zZlX!(ʭ /i7nw77O^ 3ƣ1xrD bIcKdʢK  "ChPX?:9y.ga>_t?ƽ/C{?|d4?zf$}] UϩG2˽T[jZMz!,Y?f0? 
3h_ZYRz'XɂaV:ecDC@{RR;7kuI轿J WQ}_~]^3^z.mtq0S`K9;^sXe7B:^rn|u76cð6+v{U׿ \9 }ӫͯpq0Ѿ΋υ>…j`$Qj~D8%jԸe ZAaڀl5te+ 3A*`_LY WZ=,UV2~6g\$`?J#JT2։P\qҋ,^`t^;Tm<GɊtz tz|dj] sd;Fnj \xX ;zqR$G~ۿEwr"$DoңԫcUL|ef iCZ驦(KTk5VZ!꘥qL=L ڻ+KٻZ~JTb]ޥ)G$e 17IT 3RR8K"*'I+B~YmQ|fp;d ,60HV1Ŗ3Y~ÏH#v#ulvUŪdLJKڭ_c^8ރ^WBذ[ lؗa] 65oEX+C )D6uHk1QҢb8'(LbIM^mjo}e0`o}EQ%$&HϏIz2N *,PdH: 0:(y҃v=ҙLl{ս#mh儿jݮͮCr,Z67O|8,$ Op2yԢ4!T%6AH$5-=!m;[5dϷjH׷G[5d?jEH4!7SΚ.)f s(; Z{VoT іnᾭ Q׃2OX(d Pգ@r6:yDD-ke1k0bRCJҤP鄑c3Gu߁2?Hބ%ƙ-w} *Κdž,bHGf<ߗ."5Tƻ qMZ7$d"ȁ;S$cFUyjU]!H5v)Ͻf×ۆqVIѱ/x-BkF E 0(k52Y2^cItBՖ䑋s%hd6ׂ߰K"Wځ<QNحQyx{mNYL褙WlLƉaj "49(ɇ00-G=_sS#2GknknڦD9e ")%srDž. RkKEA(uw) D@9$IUb9d<$뒲;-ȵ&O{ϾpGxJ HɲfD y-"R\H"YE gktQM&iYJdY2$V4؃ sNJYwnVTz[5' K'gM>p֗dӴݶݛm'gm*զd^뻾5*7VjԦ,D{S|sA,RP$ CZXF*FXSR6>R22艝ZF]A,)Ղ1< x,Z*6=kU%$})VG) FkYwnalag-B7­\Vd&dv-Y[$O'vA?{F쒓ȒJu 9'`O|*R(N-0J"*SRZUt*+EA d*";߱;7[t< /V3uJKT 'm(guPt)L4330c] ̐J`595H"ת>[l@LX ̤\κs=lI"سƾXcS:aEvA^튡)Nʬ<mv22Tl /P$VmRJK䶜@jg+Ri93VttJWKܽjKWݹ"~zGɢv|IɆvQn]HȻTFE2QIX|˲J/[Kɣq!a ؕtL2ޚ_g)sG ^{E@яtPF4OQG 6נGF ʘSqHR<~i8|D**9I`"٬M] *KdX#D)2%:F)6q^J@ t!Cbֆ,2,Y/kYwn]rړ#?fvqf.CS6@_Z.1򤳼mD$*`]Ҙ.""H1 GbhmN7OYlWEbVΒrL %p|wR:dܷc+P,i{_ZYo#xڂ).) Mm(.@S,!$+%J,0l龦[)A`H'HFavGkD!|yuaauyPniǢ&Qta)1hw=v%ӃCy?7+%igRuLCFZb筳ZKwdBGlg2exC[ۚ TqBPTP<'mxv[~ occ&o8Ãfc^pO,~=sb]K^m ꖎ56m械fEGbŏLJgypt0k[UVuj4sT;-t6lX|r|O'4:.j6v+oz= / Q{[ߌ~>b?Lrc:[~~>%4Tx#\dct|r4 t0-^^Q-Kgx0XIFrǗ?}b?jŏ/W/x)Z |Mwow뀹Cj/CWMs{[4-UO=۴k|5jƹsIGؚҎKa׭XW0%6}-?z$XH YaNɔ{Uf2$1+J@E IFZ/ٰ0.M=_^{ՠQG!}}SI Q #En*<;-H Y"Bk:jPW2mK]ye˰jy# 9w,9Q 60H ĜuitR!T!zYSljR}Y<т)H$HQ` FLNau,JR0N%CD].$2|#u b@L7@Q7IX)OĈr(eyk؅ŕ4t'W|TLR))"Z`ұxS0g^"m[^m^8ҷK|^{w}}=qz_XC|RQWUP\̠Y`zN3RE9:kr0࠼(}td4-Y%WI7kJ{H |;7ѨFx4g6rg4k o6׽ٻxMg)Z쾭̮?;.{^.Lד퀴iy+(k^lI% [|ovL!uP"4ɤejњB]_ ֣\l-f= B>u0gD{N(`}vvnZi?x}l!^/|U>-<|g::>9[ lÿ~>=u=?p:.z/)E3+4³dfY;O]>ww͐ݳ?ϱhQ0xҩϭkgs / ',m[2;y;f7ߛ5/jqv4x23{/?fDfo^[f4ΎOO]5{Pj;['sVtDW[ՍTьW05)h5İq,ת?[7˓lqA&ɜ*ڱsLtŊ((Q'颇[mZZ!1lvs1ɯ=S$UQ;(8MV҃ح @dtBH‚·gPm/RmѽLێ| Y,<f(D CA򫏬d*`*5!AY,=G}T,jXo1 :F:d2f>_'H־P0hU~vş:N6!sp{fXGxu'BJb1rci"@ź4u|(?{ȑJpgrAME?%z)!]S5$%A8ldsL_UWU>Wz%02iNY*e72é`;ϣP2,k6. 
1;0Q|)hRRФIIAf ~+]~{q="bac)Js>ΜYE\!vK<*{\іE.PY4Nd(0ZGGj.^qmRs-3dI' Wn5neV,>hijhEE~IOxp- :.~vr`e*ɤ<8GD pNЪ*萵I z,~ "f~l'W`stD"I$%9q1NQ]>70|Ε8yx1~|O~>X~oo'<^~BӃ񋃿/g;[?Г:xY_,L6d~l>_޿0z>z?I:9xPFS~gJ{8_x7(}AqĈ],ցѧMi]-o40dp83]]y:xgg˶:i xo(v.~x C›`>t;N=xg$;8z+$9.sוQEJy;uBy\ ?|}ߟN:.0ge|i ݾ,؉ۧglzt{a={W /zr޾R>X:~>y+WgG #eup.+)=t_p|L?j^wzЍ\9ˣ&Xԃb{ ٣^l]eֽ 7;Ci/}37ZSWFj*7pTWpKmx\=[N:闃ʆ R b2*t-̻)_57Q](!tA Hb43Uػ߽~N=̟X kbR3_?R>=؜twW׃w>voD[/x}ۋ@,H%XaHHj":eޭsPBxC2;Kb1J8ȡ\Eʡ\E5r-Ιux[ʒξ.RBR1S5?gΒNAG8$9 Rj,#81ArlJILEPb5!gK$],dIz&Hҏ?oJ!!ҒLN՛̩dE! "],e?Hѭ'8(Ē5)ӇPi:lu#dֆ;R@]<,iibU--I#뺝g'r|vw/^= z`:֓We4+<Xd ԦJØVzw]*ip<-0?~ټʉǠ`ƶ3a[2m6O(]QH+FeE)=zjPN2.Y1.i^_d*.Se-"FFJz &z\t):c1kaN0tRe/.y7KTKd88NSwj* 4) `BIoc  xM0P($CY,f1 ԁsITVjnw{Ghcq/:bN jA9ǂjk)h[P+nTiF#ʢۢ6b7ow r)ʐLCS&%YP\qmnr`1ʤ:4Z\1L!I9ο>D[S9dL6'%$BY1;f)}g8TRԢs67,e H )/`yV[QDsNy0}\y0}X5;8 /%,=#5XcsϘRjB*K)v"6+Q䉚ܒ#jNr ]ʝ?gPʫmO3=.{z68'oᔲ'l{o41V-|5`x *6b7 mMF1po$ʀ=FJnqbdLV!Y٣xkH#,C^N+K'>P%T3RTJpK/'LF;L!^EJ!^E5x̄s&ݞkovQڭˍ*uV[V2Q:2JRrʹ3vl^7H2fXeJY](@D@%=LµrjX7kI+ 6°e`nE0HMLgOo^WfT܌EX܌i?)k3\/eZ!:*:A:Vx#֡Rc,L9QЂ3k*AӗQ}1#1P13o,gSV$OYQZJ>/߄ԧfQ4ɛ?;dIw'Y%#bA2輦{JYܵh M1XjEp%g1+o(Œ8 5QE"ش]BT nj"iLE cPp.)ʁM 62(@])m^ P O7qid6;ld_5VSѥ八xB}L qO6}s149UXեRZO  _ Z,tOAw=Lc:w*D%bA3{'25{z.iCq%q2%þ'sq2gqgL]=$RH^|&rgLj Pp3b FoǑqh{v(A[fYh"L*Be.):U#b烨Kq|g d5YE30 !JxF"HN(;~`-z. E˒Fv>d.9n>^_·|·|·9TW[<+}̧GXE![xMdAEHkM"t{f|P:Pt?, ?JYW͚=VRlmICh 0W5 "$%BDh6WPZPo E1"Zh~kmnpwE6 AY@FI噖hGzcM@%.@Ix~)oJK \9+D惙.u惙f>`惋djNHR] ӷ7 [Pm&f !`tPQ*X񻥃3 JR^zti}[TwIUAJ\%* 9DTWXn>n'!1VȄO1a{c, YrLc)HC^` )_&e"{gi[y=Kٳ=Kٳ=K !\E'ӷrf/^ovWΰ1F6pRx.G;ϫ(b-ln{fSG׳+#8/g}s:W`'Yi" Y=BhmhDT!k;iA* ћW߽'T4O jN^eup=zjӱMH'("I+N~g֞kV\|QSA9HHuC&9-Txg2j+@9Gk P>ޚPys:ng*X3FFd'\&)oЭRfPtZsuZ|>|nG*:wq,)R]m@r>S*D2 Yiw(hRhnTUOUWה!+d}|T%IPH|qo*^}?#/NM•7%8'͓ɪfP2 UdkY]UY\= T#z8*T\XUM䀢ch:HRج"ȳ@}.Uh{3"oj%e?o2۪:έ?x7]_+E'ڲS0)J.RTrUe 6:(ե/27Vk@*ǤW~T{jPp?[RJ"ʃAV&+ ^P`LAצ+' *Tx(~lp8>lS2JB1* >݋7:ކ'kn6OJe5d{6D]^cdY+F#^2᛻'& BRz/R[u];YZ?op5Пo^~b4cwf/ӣt81Aۭ;zmv {{yTÍK~p 1p:}X)-j"V#`o?: 7\߼2vUXā7NsK T.CS6x^[V;rVH;g9%܅f,y{C6L-.ں9KwrSpf"ZA%Ff<<,lYCY{fܮw [i-ڝwi}?j3"dž蜍;N UxҴȁɴTR=Oc9m:" ;E`E3Jڝ8w4$^ySglPvbLM j(78y]} t/>NoGW.- &\/;BM\#W -}i}\U@΄3T*0KђU1GKg4X6S[Bܟ.5R lN?K+r ;8)R psa?__NFMV4$!dk罒Ʀ71Pa$ιE%2+^p{75hWh`s)8{7?AS~~-D)Z(·{)qh-k+nbĖ|̄DeDNa [B%uBG3sU@$ #yP]FCV >)0D-2-[k4T5"OV@+ "7dJ5-JF|G>\&71>DcQhu!8&PBsK#RKn7D\Hzsn)z(-]&٪W*^A") K{i8[Ѻ! 
)[_KhJcv[wђ&yʞYQ23L'SB6y\J{?˃KD#7'X; ./U1"Mj$)WG,&xaC' 4^ k.5N9|i3'vOxG{&#;Nz.{8$hLU)PZX.i+eS9:(0CZdJv[b0A%6:`(K]S4㹠WԘ`◀{dHm$x SF ܯYI20]AO%5iN(1z 14Hީv lG$NaYi iq,< ^BYG2½kVȸi;nrw[!G 5LȎjƤN,@gb.^w$Y 9LwLqYa /OuBssh$Gpջn'Ǵ SH =d^*t~טۓ_3Cpś}WSl6Tκ z}oM.O6MVWooޮ_W?oGm %k?|*boVb<{Yvu]ۛ~y\mUU%c?(^ZIt3wfړN/NxLeG-kf@>[/4>yȷ7]y{̜<, >le^nL9:pdi&߬GP(4i>NQ%Hdn޶\ IUZguEKf#7nɞvoe]M]/V/.VXכ`/6U/鰇M㉲z{ތMVU}!~@+u\I?;m{=þK:|ѳX]oA4T7mNWflk5zשdű?K$KAw{ ~ CS5"Bw+Rކ-t^i5#>m~ή^ ^o~ nP&~4\fY͵LzwcYL|^\B~iks=׿~ohW%Ԍ[X0}{DF?O ۑ#ZixMh @tJ-:0PfJQ΁u/YWS9!bלT&K {VZ?ZSJ2RLu @i{TG@sV=^Yؐ[K 4*&'XX#Zj!r2R(R3lR [φλ&J@k3kV~/XC J4~5:hg* ;{MnE=&p?%dC:^`'S/nn9ga }sak8 1H ¼?yԘ x6ug){ tˊ3ORBnCǧ7Oab7i٩>}瘂9V.X#l|vjLVX ZOm=g90Nhr=YvKdm?g nϳ_f`Y$zs;f`VJ|,HB^c?n I#ot kt6I[W2UtnI"i{*?F1C(^>g̻ c8C,-sYHens#+\z&uYMaRLڭOgO !SFh23Ԙ.CY?џC~~Wyߝ"g-Tr]Jkj@OU:{PjmJio&*p ,HEX!un몊 ULB۶M8C> yg#&kIRl=%۟/Yz+G[e2gyR Z^aZB@r:^p`ml)!YUaUC5C%̌fKR%%A.0u/˰I__|ܞ{ 70ٻ'#HpCA!H) HYiʕXudScmGzpKU[ E}|e,]cqFX{IvϘ;Oߏd&Je= iE f1<5sql G lU,YknK SJᔆe+B**% $ab@KG'8=A\ K<&\ O\^\ 9?SAKpTxҙ#=1?~9AUyq)G sbуV>VPqƙxZ'w et5!Y3lX`@ɉ=\a)1>Q$Ca0Da0B).8 y**TjX`,˾1q!Ru$fT 6[2f7.Ezy5F.mVs0]e`;]`܆9]F[1e 9$w<U } 8a JX] sh>@to9"x hhu 0ZGmn(#GH* AK~Z阞r8њiN˒  6s$dCK{R{s4\[BfN m{`s#aCA̔ F|g`K{V!K͎sۭyUQ\%i 'bhnX?>pGs5GԠ:(=_EzpH-:M{Ui~d tL׮A$Ծ*KJvOb~i'q w r)pX_z&{ '< #g23iAcEcT7[ &Zͭ;Ơa%R=12h{`4u,h%~dinf@]*=th[zw8Kn5=QcGB6J] ttJ\_į/_'*-lhoNݼ_Ҫd",>K /ɷeGXb%~ȅ.r]nͺ*[n<UlF'0:%֭F,`q}zXr+;-}T;cőxA$V҆y;fqN9HW>JJT6~ :s?t81IӺSS@KEO R=RV\Ԟt ?@e > W/ݿ/bߩng.;C8jݪeI7ܪʸDsL6a˧k1.q9b/Mt~zs@ܞ~`AM1_iJluya)j3dhv"Ńzm*Y4LSV;_o &D;mKoQ W9%j(Fu:,8XtCV*Ebrv脻 NxzⳣǸYH tfί>|KCV+׫$GWmp~*fqF(%yd]OFP&:x h=9,2YXVtT#v˰e.T ;.jR%AB"7>\C>@O.\G!5GbUH!<%iIZ|)A()I\W1Ʃ#bRbp%ZԪYeG<ؐ$^AؚDrax\ukK(ng^-ܷv덍F}\/kOn.G:sDR̪QFk ==_lQ|EAC Guīyin"ƽ(P *+L( n7Wd7S7> _O,K*"'k3^M8b,mpaf{w`؂p~.@oe[&Ej}j{-Rk C}2DǬK?d=($ɧ%?/v뫇!B}RΖ{%2%0MGA꨷ۓu ~ 0rĂ׻50㖿 bpʿrA/ \I/siK? lb3`2`lG,itIKv6ubbK2bHV  фFp_44-:^jVrp u%x$ 8 l ):<~v{wÅ(!NS?:2P@͵Rf^;Ί"&j7n@Z XRJ]")G3v cqҮVOZVGnniw1]ikM03eϤ,^` e_Y`BX2EZݼuYD+&o "&6 ~ CIL%;kqC!@!_Y%)T9FD _tI ڭY8^Z[[Ck>2ͩPQdXINX36FY *j~ SM6bpĎFv3r$)d\;ݢ@e3DJeD" P &2+#mH}vu$*w.#? 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000003655261515136063334017716 0ustar rootroot
Jan 27 06:53:42 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 06:53:42 crc restorecon[4594]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:42 crc 
restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc 
restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 
crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 
06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:42 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 
06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:53:43 crc restorecon[4594]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 06:53:43 crc kubenswrapper[4872]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:53:43 crc kubenswrapper[4872]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 06:53:43 crc kubenswrapper[4872]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:53:43 crc kubenswrapper[4872]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:53:43 crc kubenswrapper[4872]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 06:53:43 crc kubenswrapper[4872]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.861810 4872 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870414 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870473 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870478 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870483 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870487 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870492 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870497 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870502 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870506 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870511 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870517 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870522 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870528 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870533 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870539 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870543 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870547 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870551 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870555 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870559 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870562 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870566 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870570 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870573 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870577 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870580 4872 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870584 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870588 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870592 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870595 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870599 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870613 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870617 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870621 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870625 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870628 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870632 4872 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870636 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870639 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870643 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870648 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870651 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870655 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870658 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870663 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870667 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870672 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870676 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870679 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870683 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870686 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870690 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870694 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870698 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870704 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870708 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870711 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870715 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870719 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870722 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870726 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870729 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870734 4872 feature_gate.go:353] Setting GA 
feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870739 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870745 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870750 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870755 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870760 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870766 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870771 4872 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.870775 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872244 4872 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872271 4872 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872282 4872 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872289 4872 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872299 4872 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872305 4872 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872315 4872 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872323 4872 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872331 4872 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872336 4872 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872343 4872 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872350 4872 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872356 4872 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872362 4872 flags.go:64] FLAG: --cgroup-root="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872368 4872 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872374 4872 flags.go:64] FLAG: --client-ca-file="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872379 4872 flags.go:64] FLAG: --cloud-config="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872384 4872 flags.go:64] FLAG: --cloud-provider="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872389 4872 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 06:53:43 crc 
kubenswrapper[4872]: I0127 06:53:43.872394 4872 flags.go:64] FLAG: --cluster-domain="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872399 4872 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872403 4872 flags.go:64] FLAG: --config-dir="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872408 4872 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872413 4872 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872420 4872 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872424 4872 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872430 4872 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872434 4872 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872439 4872 flags.go:64] FLAG: --contention-profiling="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872443 4872 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872448 4872 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872453 4872 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872458 4872 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872465 4872 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872470 4872 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872476 4872 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872481 4872 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872486 4872 flags.go:64] FLAG: --enable-server="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872490 4872 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872509 4872 flags.go:64] FLAG: --event-burst="100" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872521 4872 flags.go:64] FLAG: --event-qps="50" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872525 4872 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872529 4872 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872534 4872 flags.go:64] FLAG: --eviction-hard="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872545 4872 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872549 4872 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872553 4872 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872558 4872 flags.go:64] FLAG: --eviction-soft="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872563 4872 flags.go:64] FLAG: 
--eviction-soft-grace-period="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872567 4872 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872571 4872 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872575 4872 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872579 4872 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872584 4872 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872588 4872 flags.go:64] FLAG: --feature-gates="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872594 4872 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872599 4872 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872605 4872 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872609 4872 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872615 4872 flags.go:64] FLAG: --healthz-port="10248" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872620 4872 flags.go:64] FLAG: --help="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872625 4872 flags.go:64] FLAG: --hostname-override="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872629 4872 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872633 4872 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872637 4872 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872642 4872 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872646 4872 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872651 4872 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872655 4872 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872660 4872 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872664 4872 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872668 4872 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872673 4872 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872677 4872 flags.go:64] FLAG: --kube-reserved="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872681 4872 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872685 4872 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872690 4872 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872694 4872 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872698 4872 flags.go:64] FLAG: --lock-file="" 
Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872702 4872 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872706 4872 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872710 4872 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872717 4872 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872721 4872 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872725 4872 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872730 4872 flags.go:64] FLAG: --logging-format="text" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872734 4872 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872738 4872 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872742 4872 flags.go:64] FLAG: --manifest-url="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872746 4872 flags.go:64] FLAG: --manifest-url-header="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872753 4872 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872758 4872 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872763 4872 flags.go:64] FLAG: --max-pods="110" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872767 4872 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872771 4872 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872775 4872 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872780 4872 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872784 4872 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872788 4872 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872792 4872 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872807 4872 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872811 4872 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872815 4872 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872819 4872 flags.go:64] FLAG: --pod-cidr="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872823 4872 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872831 4872 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872835 4872 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872866 4872 
flags.go:64] FLAG: --pods-per-core="0" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872871 4872 flags.go:64] FLAG: --port="10250" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872875 4872 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872880 4872 flags.go:64] FLAG: --provider-id="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872884 4872 flags.go:64] FLAG: --qos-reserved="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872888 4872 flags.go:64] FLAG: --read-only-port="10255" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872892 4872 flags.go:64] FLAG: --register-node="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872896 4872 flags.go:64] FLAG: --register-schedulable="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872901 4872 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872909 4872 flags.go:64] FLAG: --registry-burst="10" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872914 4872 flags.go:64] FLAG: --registry-qps="5" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872918 4872 flags.go:64] FLAG: --reserved-cpus="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872922 4872 flags.go:64] FLAG: --reserved-memory="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872929 4872 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872933 4872 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872939 4872 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872943 4872 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872948 4872 flags.go:64] FLAG: --runonce="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872952 4872 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872956 4872 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872961 4872 flags.go:64] FLAG: --seccomp-default="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872965 4872 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872969 4872 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872973 4872 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872978 4872 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872983 4872 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872988 4872 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872993 4872 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.872998 4872 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873004 4872 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873009 4872 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 
27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873015 4872 flags.go:64] FLAG: --system-cgroups="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873020 4872 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873029 4872 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873033 4872 flags.go:64] FLAG: --tls-cert-file="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873038 4872 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873044 4872 flags.go:64] FLAG: --tls-min-version="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873049 4872 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873054 4872 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873058 4872 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873064 4872 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873070 4872 flags.go:64] FLAG: --v="2" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873080 4872 flags.go:64] FLAG: --version="false" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873088 4872 flags.go:64] FLAG: --vmodule="" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873097 4872 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873103 4872 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873225 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873230 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873235 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873240 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873244 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873248 4872 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873252 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873256 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873260 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873266 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873270 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873274 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873280 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873284 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873288 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873293 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873297 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873302 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873307 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873311 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873315 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873320 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873324 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873328 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873332 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873336 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873341 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873344 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873348 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873352 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873356 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873360 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873364 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873368 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873371 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873375 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873379 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873384 4872 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873389 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873394 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873398 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873403 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873407 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873412 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873416 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873421 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873425 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873431 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873437 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873443 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873449 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873455 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873460 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873464 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873470 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873474 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873479 4872 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873484 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873488 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873493 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873497 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873501 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873507 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873511 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873515 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873519 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873524 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873529 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873534 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873539 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.873542 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.873559 4872 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.884320 4872 server.go:491] "Kubelet version" 
kubeletVersion="v1.31.5" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.884384 4872 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884518 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884541 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884556 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884567 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884577 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884587 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884597 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884607 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884617 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884626 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884634 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884643 4872 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884651 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884660 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884668 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884677 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884685 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884694 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884702 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884714 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884725 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884734 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884744 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884752 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884761 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884770 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884779 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884787 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884797 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884809 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884820 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884831 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884864 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884874 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884885 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884894 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884903 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884911 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884919 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884927 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884936 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884945 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884953 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884961 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884969 4872 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884978 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884986 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.884995 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885004 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885012 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885020 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885028 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885036 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885045 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885053 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885062 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885070 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885079 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885089 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885097 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885106 4872 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885114 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885122 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885131 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885139 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885147 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885156 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885164 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885173 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885181 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885191 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:53:43 
crc kubenswrapper[4872]: I0127 06:53:43.885205 4872 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885443 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885457 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885466 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885475 4872 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885484 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885492 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885500 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885509 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885517 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885529 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885540 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885549 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885560 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885571 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885580 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885589 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885598 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885606 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885615 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885625 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885634 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885642 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885650 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885659 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885667 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885676 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885684 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885692 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885700 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885708 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885717 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885726 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885737 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885746 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885756 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885765 4872 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885774 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885783 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885791 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885802 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885813 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885823 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885831 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885865 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885875 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885884 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885894 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885903 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885912 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885921 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885931 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885940 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885949 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885957 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885966 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885975 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885984 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.885992 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:53:43 crc kubenswrapper[4872]: 
W0127 06:53:43.886000 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886008 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886016 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886026 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886034 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886043 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886051 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886060 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886069 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886077 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886086 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886094 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:53:43 crc kubenswrapper[4872]: W0127 06:53:43.886103 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.886117 4872 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.886341 4872 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.892359 4872 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.892510 4872 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
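The warnings above come from gate names this kubelet binary does not know (cluster-level OpenShift gates handed down in configuration), while the gates it does know are applied, with GA/deprecated ones additionally logging removal notices; the effective map is then printed once per parsing pass, which is why the same list appears twice. Below is a minimal, self-contained Go sketch of that classification, assuming a hypothetical binary with only a handful of known gates; it is illustrative only and is not the kubelet's actual feature_gate.go.

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Gates this hypothetical binary was compiled to understand (name -> default).
	known := map[string]bool{
		"CloudDualStackNodeIPs":                  true,
		"DisableKubeletCloudCredentialProviders": true,
		"KMSv1":                                  false,
		"ValidatingAdmissionPolicy":              true,
	}

	// Overrides handed down from cluster configuration, including names this
	// binary does not recognize (stand-ins for the OpenShift gates in the log).
	overrides := map[string]bool{
		"CloudDualStackNodeIPs": true,
		"KMSv1":                 true,
		"GatewayAPI":            true,
		"NewOLM":                true,
	}

	names := make([]string, 0, len(overrides))
	for name := range overrides {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic warning order

	// Start from compiled-in defaults, then apply only the recognized overrides.
	effective := make(map[string]bool, len(known))
	for name, def := range known {
		effective[name] = def
	}
	for _, name := range names {
		if _, ok := known[name]; !ok {
			fmt.Printf("W: unrecognized feature gate: %s\n", name)
			continue
		}
		effective[name] = overrides[name]
	}
	fmt.Printf("I: feature gates: %v\n", effective)
}
```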
Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.894541 4872 server.go:997] "Starting client certificate rotation" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.894610 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.895898 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-24 06:55:57.944213633 +0000 UTC Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.895998 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.921596 4872 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.925430 4872 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 06:53:43 crc kubenswrapper[4872]: E0127 06:53:43.925867 4872 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.948009 4872 log.go:25] "Validated CRI v1 runtime API" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.987928 4872 log.go:25] "Validated CRI v1 image API" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.989702 4872 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.995189 4872 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-06-48-05-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 06:53:43 crc kubenswrapper[4872]: I0127 06:53:43.995232 4872 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.008685 4872 manager.go:217] Machine: {Timestamp:2026-01-27 06:53:44.005820274 +0000 UTC m=+0.533295490 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:91b3bc63-e466-472c-acc7-7b74e49fca03 BootID:37e22313-b71b-4ef4-bf05-eb3dbac65b5b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:39:ac:03 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:39:ac:03 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d2:44:dc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1d:05:7b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:14:2a:3d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:43:18:24 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ca:b5:f7:50:54:ea Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:12:a6:d4:70:cb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.008967 4872 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. 
Perf event counters are not available. Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.009191 4872 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.009651 4872 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.009860 4872 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.009916 4872 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.010111 4872 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.010121 4872 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.010703 4872 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.010728 4872 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.011517 4872 state_mem.go:36] "Initialized new in-memory state store" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.011639 4872 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.014970 4872 kubelet.go:418] "Attempting to sync node with API server" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.014999 4872 kubelet.go:313] "Adding static pod 
path" path="/etc/kubernetes/manifests" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.015016 4872 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.015029 4872 kubelet.go:324] "Adding apiserver pod source" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.015040 4872 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.019286 4872 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.020225 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.020239 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.020315 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.020330 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.020722 4872 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.023065 4872 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.024871 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.024968 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025032 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025081 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025136 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025190 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025241 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025293 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025345 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025415 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025487 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.025536 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.026585 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.027406 4872 server.go:1280] "Started kubelet" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.028182 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:44 crc systemd[1]: Started Kubernetes Kubelet. 
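What follows is a long run of structured klog entries (volume reconstruction, factory registration, lease errors), each wrapped in a journald prefix. When reading a dump like this it can help to split every line into its klog header and message. A rough sketch, assuming the standard klog header layout "Lmmdd hh:mm:ss.uuuuuu PID file:line]" and nothing kubelet-specific beyond the sample line copied from the log:

```go
package main

import (
	"fmt"
	"regexp"
)

// Assumed header shape: severity letter, mmdd, time, PID, source "file:line]",
// then the message. This is a reading aid for the dump, not code from the kubelet.
var klogRe = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([^:\]]+:\d+)\]\s*(.*)`)

func main() {
	line := `Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.041423 4872 volume_manager.go:289] "Starting Kubelet Volume Manager"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no klog header found")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s pid=%s source=%s\n  msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```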
Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.033307 4872 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.033836 4872 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.034583 4872 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.037613 4872 server.go:460] "Adding debug handlers to kubelet server" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.041007 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.041077 4872 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.041103 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:02:44.497703359 +0000 UTC Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.041412 4872 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.041423 4872 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.042296 4872 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.042721 4872 factory.go:55] Registering systemd factory Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.042740 4872 factory.go:221] Registration of the systemd container factory successfully Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.043541 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.043603 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.043643 4872 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.046220 4872 factory.go:153] Registering CRI-O factory Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.046247 4872 factory.go:221] Registration of the crio container factory successfully Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.046395 4872 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.046432 4872 factory.go:103] Registering Raw factory Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.046586 4872 manager.go:1196] Started watching for new ooms in manager Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.046945 4872 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.050051 4872 manager.go:319] Starting recovery of all containers Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059370 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059428 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059447 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059500 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059518 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059533 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059547 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059561 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059577 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059593 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059607 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059624 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059673 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059693 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059708 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059724 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059739 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059754 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059768 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059785 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059802 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059876 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059890 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059904 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059920 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059935 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059955 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.059975 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060034 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060052 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060065 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060084 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060099 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060112 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060128 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060141 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060155 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060205 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060222 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060239 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060254 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060269 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060282 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060297 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060311 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060324 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060375 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060395 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060409 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060423 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060438 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060451 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060470 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060486 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060500 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060550 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060567 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060585 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060600 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060615 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060630 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060642 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060655 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060670 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060707 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060724 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060741 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060755 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060768 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.060782 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061370 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061395 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061409 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061430 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061446 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061461 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061476 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061492 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061511 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061532 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061553 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061579 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061593 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061607 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061621 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061635 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061650 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061665 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061678 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061694 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061707 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061722 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061739 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061755 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061773 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061793 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061807 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061821 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061835 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061872 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061890 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061905 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061919 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061935 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061958 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061974 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.061993 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062014 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062035 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062057 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062077 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062095 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062111 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062132 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062173 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062192 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062210 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062228 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062245 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062261 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062276 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062289 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062303 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062317 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062329 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062343 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062356 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062374 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062388 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062401 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062415 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062429 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062493 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062513 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062525 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062538 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062553 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062567 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062581 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062598 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062611 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062626 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062642 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062657 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062670 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062682 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062696 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062708 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062724 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062737 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062756 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062769 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062782 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062796 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062808 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062821 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062834 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062865 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062878 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062893 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062906 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062921 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062935 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062948 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062961 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.062974 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.063176 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065011 4872 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065043 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065061 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065080 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065100 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065120 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065139 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065154 4872 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065172 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.055559 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e83fc7b554102 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:53:44.027365634 +0000 UTC m=+0.554840830,LastTimestamp:2026-01-27 06:53:44.027365634 +0000 UTC m=+0.554840830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065189 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065270 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065335 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065353 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065371 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065385 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065402 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065417 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065432 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065446 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065461 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065476 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065495 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065512 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065528 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065543 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065610 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065640 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065663 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065680 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065696 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065711 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065729 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065744 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065764 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065779 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065795 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065810 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065825 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065860 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065877 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065891 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065905 4872 reconstruct.go:97] "Volume reconstruction finished" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.065916 4872 reconciler.go:26] "Reconciler: start to sync state" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.081507 4872 manager.go:324] Recovery completed Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.094902 4872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.096784 4872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.096828 4872 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.096874 4872 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.096924 4872 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.097648 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.097704 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.102174 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.109641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.109681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.109694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.110664 4872 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.110681 4872 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.110704 4872 state_mem.go:36] "Initialized new in-memory state store" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.131546 4872 policy_none.go:49] "None policy: Start" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.132682 4872 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.132713 4872 state_mem.go:35] "Initializing new in-memory state store" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.143049 4872 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.189222 4872 manager.go:334] "Starting Device Plugin manager" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.189301 4872 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.189320 4872 server.go:79] "Starting device plugin registration server" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.189863 4872 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.189904 4872 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.190201 4872 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.190303 4872 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.190313 4872 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.197187 4872 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.197293 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.197951 4872 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.198611 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.198643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.198654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.198773 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc 
kubenswrapper[4872]: I0127 06:53:44.199092 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199151 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199550 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199745 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.199805 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200220 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200252 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200285 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200414 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200549 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.200696 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201283 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201298 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201622 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201649 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201654 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.201688 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.202608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.202642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.202653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.202876 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.202906 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.204556 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.204580 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.204591 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.204720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.204741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.204753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.248460 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267374 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267420 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267443 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267463 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267483 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267503 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267523 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267542 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267562 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267581 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267599 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267622 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267644 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.267665 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 
06:53:44.267700 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.291899 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.293544 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.293697 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.293772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.293880 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.294524 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368662 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368736 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368785 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368802 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368959 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368819 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369054 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369019 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369072 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369143 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.368978 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369104 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369192 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369206 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369229 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369238 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369253 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369283 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369296 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369310 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369317 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369356 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369375 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369381 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369394 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369449 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369473 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369489 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369506 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.369566 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.495103 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.497049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.497102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.497113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.497144 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.497734 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.540330 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.566487 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.574696 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.599084 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.605517 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.617277 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c2bc0e6b0259e80fa99f3fd0717d6ae18263fde47d81a045a9e86e5459833f7f WatchSource:0}: Error finding container c2bc0e6b0259e80fa99f3fd0717d6ae18263fde47d81a045a9e86e5459833f7f: Status 404 returned error can't find the container with id c2bc0e6b0259e80fa99f3fd0717d6ae18263fde47d81a045a9e86e5459833f7f Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.617656 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-23cb9535db3157a2f47e2d80d5e28a48263183fef966861ce35e1c91c6f94611 WatchSource:0}: Error finding container 23cb9535db3157a2f47e2d80d5e28a48263183fef966861ce35e1c91c6f94611: Status 404 returned error can't find the container with id 23cb9535db3157a2f47e2d80d5e28a48263183fef966861ce35e1c91c6f94611 Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.618452 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dd11cb58d138eeff43f82f1d21b7e224723cdcb6f9c65a6ccb6c6004299386fe WatchSource:0}: Error finding container dd11cb58d138eeff43f82f1d21b7e224723cdcb6f9c65a6ccb6c6004299386fe: Status 404 returned error can't find the container with id dd11cb58d138eeff43f82f1d21b7e224723cdcb6f9c65a6ccb6c6004299386fe Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.630567 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9180d8144d137e6e03e1666385062001a062653d24df59002fea7c715a3b9227 WatchSource:0}: Error finding container 9180d8144d137e6e03e1666385062001a062653d24df59002fea7c715a3b9227: Status 404 returned error can't find the container with id 9180d8144d137e6e03e1666385062001a062653d24df59002fea7c715a3b9227 Jan 27 06:53:44 crc kubenswrapper[4872]: W0127 06:53:44.644386 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-88bdd79470598e508b900d1e16dbb1688a3a9ecc8801bbe6bc812e81ea6c6c6d WatchSource:0}: Error finding container 88bdd79470598e508b900d1e16dbb1688a3a9ecc8801bbe6bc812e81ea6c6c6d: Status 404 returned error can't find the container with id 88bdd79470598e508b900d1e16dbb1688a3a9ecc8801bbe6bc812e81ea6c6c6d Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.649747 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.898799 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.900244 4872 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.900290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.900300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:44 crc kubenswrapper[4872]: I0127 06:53:44.900337 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:53:44 crc kubenswrapper[4872]: E0127 06:53:44.900920 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Jan 27 06:53:45 crc kubenswrapper[4872]: W0127 06:53:45.025457 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:45 crc kubenswrapper[4872]: E0127 06:53:45.025582 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.034817 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.041868 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:47:51.468558538 +0000 UTC Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.102308 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"88bdd79470598e508b900d1e16dbb1688a3a9ecc8801bbe6bc812e81ea6c6c6d"} Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.103859 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9180d8144d137e6e03e1666385062001a062653d24df59002fea7c715a3b9227"} Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.105625 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23cb9535db3157a2f47e2d80d5e28a48263183fef966861ce35e1c91c6f94611"} Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.107143 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd11cb58d138eeff43f82f1d21b7e224723cdcb6f9c65a6ccb6c6004299386fe"} Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.108252 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c2bc0e6b0259e80fa99f3fd0717d6ae18263fde47d81a045a9e86e5459833f7f"} Jan 27 06:53:45 crc kubenswrapper[4872]: W0127 06:53:45.276648 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:45 crc kubenswrapper[4872]: E0127 06:53:45.276718 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:45 crc kubenswrapper[4872]: W0127 06:53:45.440881 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:45 crc kubenswrapper[4872]: E0127 06:53:45.440996 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:45 crc kubenswrapper[4872]: E0127 06:53:45.451343 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Jan 27 06:53:45 crc kubenswrapper[4872]: W0127 06:53:45.479396 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:45 crc kubenswrapper[4872]: E0127 06:53:45.479522 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.702040 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.712748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.712800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.712814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:45 crc kubenswrapper[4872]: I0127 06:53:45.712877 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:53:45 crc 
kubenswrapper[4872]: E0127 06:53:45.713656 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.034356 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.042443 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:33:12.464534138 +0000 UTC Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.103882 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:53:46 crc kubenswrapper[4872]: E0127 06:53:46.105363 4872 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.113526 4872 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354" exitCode=0 Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.113602 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.113718 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.114919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.114948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.114959 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.117649 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.117717 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.117727 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.117732 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.117740 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.119700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.119736 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.119749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.121994 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01" exitCode=0 Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.122091 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.122116 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.122993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.123014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.123025 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.125157 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.125567 4872 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b0a309889410e408cfde09a06793a72c65c69b9812783fc7b1deb663d07ea400" exitCode=0 Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.125667 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b0a309889410e408cfde09a06793a72c65c69b9812783fc7b1deb663d07ea400"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.125682 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.131810 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.131861 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.131875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.131916 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.131943 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.131955 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.137731 4872 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923" exitCode=0 Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.137774 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923"} Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.137811 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.138521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.138545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:46 crc kubenswrapper[4872]: I0127 06:53:46.138558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:46 crc kubenswrapper[4872]: W0127 06:53:46.737718 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:46 crc kubenswrapper[4872]: E0127 06:53:46.737972 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.034679 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.043138 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:28:51.78395081 +0000 UTC Jan 27 06:53:47 crc kubenswrapper[4872]: E0127 06:53:47.053106 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.142623 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.142679 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.142689 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.144923 4872 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="98e1107d6bc151aa51fd301fc6617969004a4e72ccc7c145cc914e7b50c8e28b" exitCode=0 Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.145000 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"98e1107d6bc151aa51fd301fc6617969004a4e72ccc7c145cc914e7b50c8e28b"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.145406 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.148328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.148368 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.148380 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.148709 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.148875 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.150218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.150338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.150350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.155096 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 
06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.155585 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.155932 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.155969 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.155981 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3"} Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.156282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.156308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.156318 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.156976 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.156993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.157004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.314734 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.316051 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.316081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.316091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:47 crc kubenswrapper[4872]: I0127 06:53:47.316128 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:53:47 crc kubenswrapper[4872]: E0127 06:53:47.316637 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Jan 27 06:53:47 crc kubenswrapper[4872]: W0127 06:53:47.331650 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: 
connect: connection refused Jan 27 06:53:47 crc kubenswrapper[4872]: E0127 06:53:47.331878 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:47 crc kubenswrapper[4872]: W0127 06:53:47.424806 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Jan 27 06:53:47 crc kubenswrapper[4872]: E0127 06:53:47.424916 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.044129 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:59:18.815467576 +0000 UTC Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.161544 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343"} Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.161641 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805"} Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.161645 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.162674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.162711 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.162722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.163956 4872 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="47525a2a43bb595707e690c1bff883f08b95213e396d60d28794c7aba6e633b6" exitCode=0 Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.164009 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"47525a2a43bb595707e690c1bff883f08b95213e396d60d28794c7aba6e633b6"} Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.164112 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.164126 4872 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.164158 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.164180 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.165343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.165378 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.165390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.165644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.165669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.165689 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.167657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.167684 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.167694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:48 crc kubenswrapper[4872]: I0127 06:53:48.946085 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.044605 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:08:52.745511095 +0000 UTC Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.173111 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4d7e498ad23b15a6b39f3af7bb31f9aa5ce5a4a630ae6d5f742522d057770dc"} Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.173171 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e7ed2e2a4017530386f9bd9e8278d0000ee396e297573a3bbbeb1cf4266b13a"} Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.173188 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5d0e2a1336a9c07f3cdf109e66a47652d056fff7e31b2a683cb795310cfb0fc4"} Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.173200 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b7c8efc561404cb0a4245d43c95d492c08a3024c2b622b318930f8ab2c7cd422"} Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 
06:53:49.173531 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.174675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.174740 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.174755 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.548459 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.548723 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.550606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.550775 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:49 crc kubenswrapper[4872]: I0127 06:53:49.550904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.044903 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:21:52.911356351 +0000 UTC Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.183040 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.183716 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1898c6f24c628b207b666cf2db01394c39927a26008d384f52ee851b3d2df23d"} Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.184148 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.184348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.184542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.184571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.186274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.186338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.186361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.416167 4872 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.517493 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.519225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.519417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.519486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.519582 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.588041 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.588333 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.589718 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.589790 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:50 crc kubenswrapper[4872]: I0127 06:53:50.589802 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.046559 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:55:21.016208367 +0000 UTC Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.186179 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.187487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.187566 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.187594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.516305 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.703757 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.704457 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.706115 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.706164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.706184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:51 crc kubenswrapper[4872]: I0127 06:53:51.726925 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.047913 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:02:10.06549313 +0000 UTC Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.189749 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.191127 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.191167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.191182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.453119 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.453365 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.455141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.455198 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:52 crc kubenswrapper[4872]: I0127 06:53:52.455218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.048424 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:52:40.998540122 +0000 UTC Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.079723 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.079984 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.081381 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.081534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.081614 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.192801 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.194919 4872 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.195014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.195040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.589087 4872 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:53:53 crc kubenswrapper[4872]: I0127 06:53:53.589244 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.048923 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:27:00.407840963 +0000 UTC Jan 27 06:53:54 crc kubenswrapper[4872]: E0127 06:53:54.198410 4872 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.919818 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.920068 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.921225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.921254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.921266 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:54 crc kubenswrapper[4872]: I0127 06:53:54.926264 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.050118 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:12:41.589871792 +0000 UTC Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.197679 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.198050 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.199261 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.199334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.199348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:55 crc kubenswrapper[4872]: I0127 06:53:55.204407 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:53:56 crc kubenswrapper[4872]: I0127 06:53:56.051813 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:14:57.024625203 +0000 UTC Jan 27 06:53:56 crc kubenswrapper[4872]: I0127 06:53:56.200312 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:56 crc kubenswrapper[4872]: I0127 06:53:56.201881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:56 crc kubenswrapper[4872]: I0127 06:53:56.201929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:56 crc kubenswrapper[4872]: I0127 06:53:56.201940 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:57 crc kubenswrapper[4872]: I0127 06:53:57.052366 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:06:11.054902614 +0000 UTC Jan 27 06:53:57 crc kubenswrapper[4872]: I0127 06:53:57.202737 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:53:57 crc kubenswrapper[4872]: I0127 06:53:57.203769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:53:57 crc kubenswrapper[4872]: I0127 06:53:57.203807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:53:57 crc kubenswrapper[4872]: I0127 06:53:57.203824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:53:58 crc kubenswrapper[4872]: I0127 06:53:58.035616 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 06:53:58 crc kubenswrapper[4872]: I0127 06:53:58.053072 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:52:48.26744239 +0000 UTC Jan 27 06:53:58 crc kubenswrapper[4872]: I0127 06:53:58.121961 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 06:53:58 crc kubenswrapper[4872]: I0127 06:53:58.122279 4872 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 06:53:58 crc kubenswrapper[4872]: I0127 06:53:58.130238 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 06:53:58 crc kubenswrapper[4872]: I0127 06:53:58.130323 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 06:53:59 crc kubenswrapper[4872]: I0127 06:53:59.053781 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:18:35.180176399 +0000 UTC Jan 27 06:54:00 crc kubenswrapper[4872]: I0127 06:54:00.054901 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:50:54.697230293 +0000 UTC Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.056089 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:27:51.460692128 +0000 UTC Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.713035 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.713343 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.716209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.716243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.716255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.719380 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.760809 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.761135 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.763448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.763539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.763566 4872 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:01 crc kubenswrapper[4872]: I0127 06:54:01.781406 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.056678 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:47:33.707350767 +0000 UTC Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.214221 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.214866 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.214900 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.215627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.215671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.215685 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.216232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.216372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:02 crc kubenswrapper[4872]: I0127 06:54:02.216498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.057356 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:25:18.066686587 +0000 UTC Jan 27 06:54:03 crc kubenswrapper[4872]: E0127 06:54:03.094356 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.096268 4872 trace.go:236] Trace[571671486]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 06:53:52.468) (total time: 10627ms): Jan 27 06:54:03 crc kubenswrapper[4872]: Trace[571671486]: ---"Objects listed" error: 10627ms (06:54:03.096) Jan 27 06:54:03 crc kubenswrapper[4872]: Trace[571671486]: [10.627415092s] [10.627415092s] END Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.096301 4872 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.097367 4872 trace.go:236] Trace[207106502]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 06:53:48.284) (total time: 14812ms): Jan 27 06:54:03 crc kubenswrapper[4872]: Trace[207106502]: ---"Objects listed" error: 14812ms (06:54:03.097) Jan 27 06:54:03 crc kubenswrapper[4872]: Trace[207106502]: [14.81271194s] [14.81271194s] END Jan 27 06:54:03 
crc kubenswrapper[4872]: I0127 06:54:03.097399 4872 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:03 crc kubenswrapper[4872]: E0127 06:54:03.097605 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.098052 4872 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.098199 4872 trace.go:236] Trace[1694340501]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 06:53:50.142) (total time: 12955ms): Jan 27 06:54:03 crc kubenswrapper[4872]: Trace[1694340501]: ---"Objects listed" error: 12955ms (06:54:03.098) Jan 27 06:54:03 crc kubenswrapper[4872]: Trace[1694340501]: [12.955777517s] [12.955777517s] END Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.098219 4872 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.099462 4872 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.117485 4872 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.135329 4872 csr.go:261] certificate signing request csr-57m5h is approved, waiting to be issued Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.147909 4872 csr.go:257] certificate signing request csr-57m5h is issued Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.160830 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.179234 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:54:03 crc kubenswrapper[4872]: E0127 06:54:03.228586 4872 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.364335 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58690->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.365983 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58690->192.168.126.11:17697: read: connection reset by peer" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.366387 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: 
connect: connection refused" start-of-body= Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.366458 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 06:54:03 crc kubenswrapper[4872]: I0127 06:54:03.893904 4872 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 06:54:03 crc kubenswrapper[4872]: W0127 06:54:03.894222 4872 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 06:54:03 crc kubenswrapper[4872]: W0127 06:54:03.894220 4872 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 06:54:03 crc kubenswrapper[4872]: W0127 06:54:03.894251 4872 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 06:54:03 crc kubenswrapper[4872]: W0127 06:54:03.894312 4872 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 06:54:03 crc kubenswrapper[4872]: E0127 06:54:03.894351 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.182:53014->38.102.83.182:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e83fc9efdc316 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:53:44.625611542 +0000 UTC m=+1.153086738,LastTimestamp:2026-01-27 06:53:44.625611542 +0000 UTC m=+1.153086738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.026643 4872 apiserver.go:52] "Watching apiserver" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.033443 4872 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.033880 4872 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.034399 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.034411 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.034538 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.034650 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.034679 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.034736 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.035198 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.035405 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.035483 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.037280 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.037574 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.037613 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.037829 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.037931 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.038032 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.038199 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.038251 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.038635 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.044723 4872 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.057508 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:25:06.293741995 +0000 UTC Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.076220 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.095455 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104290 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104341 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104366 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104395 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104426 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104450 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104483 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104508 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104528 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104565 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104589 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104612 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104633 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104655 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104677 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104701 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104729 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104757 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104778 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104802 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104823 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104860 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104882 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104907 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104931 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104955 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104973 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.104997 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105025 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105047 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105069 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105093 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105117 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105156 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105178 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105211 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105233 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105251 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105276 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105300 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105322 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105340 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105365 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105390 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105411 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105438 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105461 4872 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105481 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105509 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105535 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105559 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105582 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105605 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105625 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105646 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105667 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc 
kubenswrapper[4872]: I0127 06:54:04.105693 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105718 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105741 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105765 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105792 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105810 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105833 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105875 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105899 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105921 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 
crc kubenswrapper[4872]: I0127 06:54:04.105943 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.105978 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106000 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106026 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106048 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106073 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106098 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106120 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106142 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106172 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 
06:54:04.106197 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106217 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106243 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106266 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106288 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106308 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106334 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106356 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106373 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106395 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:54:04 
crc kubenswrapper[4872]: I0127 06:54:04.106418 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106440 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106466 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106488 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106512 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106497 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106535 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106563 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106590 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106612 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106634 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106658 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106681 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106700 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106721 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106744 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106766 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106792 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106819 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106858 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106885 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106909 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106938 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106957 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.106979 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107003 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107025 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107027 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107051 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107142 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107149 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107172 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107354 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107393 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107427 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107460 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107487 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107492 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107532 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107561 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107592 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107621 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107647 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107805 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107825 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107874 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107899 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107928 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.107991 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108157 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108252 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108254 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108312 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108361 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108417 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108471 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108522 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108551 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108603 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108753 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108780 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108806 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108831 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108887 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108927 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108912 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108969 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.108956 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109155 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109195 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109220 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109336 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109380 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109405 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109514 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109543 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109563 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109587 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109629 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109653 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109678 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109714 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109735 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109754 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109776 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109810 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 
06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109942 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109964 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.109986 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110106 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110155 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110176 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110218 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110239 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110269 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110286 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110308 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110328 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110348 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110391 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110412 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110444 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110600 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110641 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110672 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110689 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 
27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110729 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110750 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110790 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.110808 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111089 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111124 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111168 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111231 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111253 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111284 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111320 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111351 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111460 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111481 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111541 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111648 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111709 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111741 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111770 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111796 
4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111929 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.111983 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.112049 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.112105 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.112145 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.112182 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.112228 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.112289 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.113098 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.114268 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.114340 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.114751 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.114933 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115077 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115100 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115258 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115332 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115422 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115376 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115453 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115713 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115759 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115821 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116048 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116389 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116059 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116070 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116061 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116123 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116507 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116592 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116613 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116697 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.116750 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116795 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116912 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.116882 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:04.616834229 +0000 UTC m=+21.144309415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116986 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.116987 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117022 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117045 4872 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117058 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118318 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118369 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118394 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118413 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118447 4872 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118461 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118473 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118512 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118541 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118551 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" 
DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118565 4872 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118583 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118593 4872 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118798 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118835 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118880 4872 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118893 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118904 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118918 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118929 4872 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118939 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118967 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118998 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 
06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119019 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119050 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119068 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119102 4872 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117067 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117104 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119154 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117311 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117447 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117518 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117552 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117686 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117807 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117833 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119224 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.117905 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119264 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118177 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118228 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118266 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118288 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118500 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.118979 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.115607 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119164 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119368 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119460 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119491 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119612 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.119876 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.120337 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.120415 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.120662 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.120797 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.121698 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.122043 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.122485 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.123322 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.124907 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.124981 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.125237 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.125506 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.125498 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.125581 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.125790 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.126097 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.126213 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.126426 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.126613 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.126651 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.127979 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.128064 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.128148 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:04.628122963 +0000 UTC m=+21.155598159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.128342 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.128593 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.128680 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.129394 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.129672 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.127437 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.130501 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.130289 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.130829 4872 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.137441 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.138556 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.138761 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.138977 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.139140 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.139330 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.139714 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.140039 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.137399 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.140497 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.140748 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.141041 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.141624 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.141567 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.141909 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.142049 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.142464 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.142183 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.142745 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.143015 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.143107 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.143710 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.144070 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.145148 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.146053 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.146914 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.147112 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.147335 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.147343 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.148013 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.148141 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.138364 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.148908 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.149155 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.147044 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.149218 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.139867 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.149426 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.149647 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.155185 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.155709 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.155880 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.156298 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.156563 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.157426 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.160618 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.154818 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.160973 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.161225 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.161764 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.157315 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 06:49:03 +0000 UTC, rotation deadline is 2026-10-19 10:32:06.577720618 +0000 UTC Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.168926 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6363h38m2.408802957s for next certificate rotation Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.162302 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.162385 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.162469 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:54:04.662438119 +0000 UTC m=+21.189913315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.162660 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.169082 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.162608 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.162788 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.169320 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.169337 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.169428 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:04.669399094 +0000 UTC m=+21.196874290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.163404 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.163792 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.164256 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.169486 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.169316 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.167320 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.167507 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.167565 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.167924 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.167939 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.168174 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.168184 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.168394 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.168465 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.168763 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.167083 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.169682 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.169999 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.170222 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.170286 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.170439 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.170655 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.171084 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.171735 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.171799 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.171884 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.173483 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.174297 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.174610 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.174760 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.175124 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.175299 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.175520 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.175556 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.176672 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.176926 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.177498 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.177659 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.178125 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.178373 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.178646 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.178659 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.178710 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.178731 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.178827 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:04.678795521 +0000 UTC m=+21.206270917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.179275 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.179578 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.179618 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.193306 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.196374 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.197732 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.206493 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.206759 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.208341 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.212958 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220024 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220119 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220248 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220260 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220271 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220279 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220288 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220297 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220320 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220328 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220337 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220346 4872 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220354 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220362 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 
06:54:04.220371 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220380 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220388 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220395 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220424 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220434 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220443 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220453 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220462 4872 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220470 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220479 4872 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220489 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220499 4872 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220508 4872 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220516 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220524 4872 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220533 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220573 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220581 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220597 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220610 4872 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220619 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220632 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220644 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220657 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220666 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220675 4872 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220688 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220698 4872 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220716 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220736 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220745 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220753 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220775 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220784 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220791 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220799 4872 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220808 4872 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220820 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220828 4872 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220837 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220873 4872 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220883 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220891 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220901 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220911 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220931 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220939 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220947 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220956 4872 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220964 4872 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220984 4872 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.220993 4872 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221025 4872 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221035 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221043 4872 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221052 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221060 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221068 4872 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221075 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221084 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221111 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221120 4872 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221128 4872 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221137 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221145 4872 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221154 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221163 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221173 4872 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221185 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221197 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221214 4872 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221223 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221231 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221239 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221252 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221261 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221271 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc 
kubenswrapper[4872]: I0127 06:54:04.221279 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221287 4872 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221297 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221306 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221327 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221350 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221359 4872 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221386 4872 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221394 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221403 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.221411 4872 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222007 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222028 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on 
node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222037 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222047 4872 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222058 4872 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222066 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222075 4872 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222084 4872 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222093 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222106 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222115 4872 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222125 4872 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222133 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222143 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222155 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222166 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222176 4872 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222186 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222195 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222204 4872 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222213 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222222 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222230 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222240 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222248 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222258 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222267 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222277 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222286 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222294 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222303 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222311 4872 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222319 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222327 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222336 4872 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222345 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222354 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222363 4872 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222371 4872 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222380 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222388 4872 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: 
I0127 06:54:04.222395 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222403 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222412 4872 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222419 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222427 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222436 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222444 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222452 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222460 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222468 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222475 4872 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222485 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222504 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222513 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222523 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222531 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.222587 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.223010 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.234173 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.234184 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.240906 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.247479 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.248539 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.258069 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.258880 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343" exitCode=255 Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.258872 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343"} Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.261419 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.273863 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.283973 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.301478 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.323618 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.323657 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.323667 4872 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.323676 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.323678 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.346129 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.352037 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.358386 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.365969 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:54:04 crc kubenswrapper[4872]: W0127 06:54:04.366257 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-1206b15d3e8f7968bc15b82738704c9255842542faa93705bd682fbc9568024a WatchSource:0}: Error finding container 1206b15d3e8f7968bc15b82738704c9255842542faa93705bd682fbc9568024a: Status 404 returned error can't find the container with id 1206b15d3e8f7968bc15b82738704c9255842542faa93705bd682fbc9568024a Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.374054 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: W0127 06:54:04.378573 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-904a01a4d4fc254e91fb0ba89f900f28e95d60da37ce23c19dc2e00182ecad87 WatchSource:0}: Error finding container 904a01a4d4fc254e91fb0ba89f900f28e95d60da37ce23c19dc2e00182ecad87: Status 404 returned error can't find the container with id 904a01a4d4fc254e91fb0ba89f900f28e95d60da37ce23c19dc2e00182ecad87 Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.387428 4872 scope.go:117] "RemoveContainer" containerID="3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.397293 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.412823 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.430181 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.463820 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.497221 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.515586 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27
T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.519468 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nf5b8"] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.524167 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.527499 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.528731 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.528959 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.540386 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0
8287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.567864 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.601168 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.627308 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.627353 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljtzw\" (UniqueName: \"kubernetes.io/projected/a379b846-ea80-4665-a69c-79b745d168ee-kube-api-access-ljtzw\") pod \"node-resolver-nf5b8\" (UID: \"a379b846-ea80-4665-a69c-79b745d168ee\") " pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.627394 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a379b846-ea80-4665-a69c-79b745d168ee-hosts-file\") pod \"node-resolver-nf5b8\" (UID: \"a379b846-ea80-4665-a69c-79b745d168ee\") " pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.627503 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.627588 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:05.627553222 +0000 UTC m=+22.155028418 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.632323 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.671475 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.689135 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.703490 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.716997 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.727987 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.728097 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.728127 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a379b846-ea80-4665-a69c-79b745d168ee-hosts-file\") pod \"node-resolver-nf5b8\" (UID: \"a379b846-ea80-4665-a69c-79b745d168ee\") " pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.728220 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a379b846-ea80-4665-a69c-79b745d168ee-hosts-file\") pod \"node-resolver-nf5b8\" (UID: \"a379b846-ea80-4665-a69c-79b745d168ee\") " pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728273 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:54:05.728216109 +0000 UTC m=+22.255691305 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728339 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728356 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728379 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728433 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:05.728411524 +0000 UTC m=+22.255886720 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.728440 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljtzw\" (UniqueName: \"kubernetes.io/projected/a379b846-ea80-4665-a69c-79b745d168ee-kube-api-access-ljtzw\") pod \"node-resolver-nf5b8\" (UID: \"a379b846-ea80-4665-a69c-79b745d168ee\") " pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.728480 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.728520 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728635 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728691 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:05.72868295 +0000 UTC m=+22.256158146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728821 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728835 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728868 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: E0127 06:54:04.728894 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:05.728886645 +0000 UTC m=+22.256361841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.731993 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.747238 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljtzw\" (UniqueName: \"kubernetes.io/projected/a379b846-ea80-4665-a69c-79b745d168ee-kube-api-access-ljtzw\") pod \"node-resolver-nf5b8\" (UID: \"a379b846-ea80-4665-a69c-79b745d168ee\") " pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.752745 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.759926 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.764123 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.787342 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.807298 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27
T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.827495 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.842327 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nf5b8" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.842529 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: W0127 06:54:04.853908 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda379b846_ea80_4665_a69c_79b745d168ee.slice/crio-7253b09329a7eb6749486f96eb65f0e9e91d99738dc85ee99482839d9b5bd112 WatchSource:0}: Error finding container 7253b09329a7eb6749486f96eb65f0e9e91d99738dc85ee99482839d9b5bd112: Status 404 returned error can't find the container with id 7253b09329a7eb6749486f96eb65f0e9e91d99738dc85ee99482839d9b5bd112 Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.860971 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.975087 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nkvlp"] Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.975554 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.980677 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.981337 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.983627 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.983773 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 06:54:04 crc kubenswrapper[4872]: I0127 06:54:04.992335 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.004226 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.037297 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.046168 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.058622 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:35:03.015948131 +0000 UTC Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.059957 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.070247 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.080118 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.092608 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.104373 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.120212 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.129444 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.133676 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ea42312-a362-48cd-8387-34c060df18a1-proxy-tls\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.133715 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ea42312-a362-48cd-8387-34c060df18a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.133736 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5ea42312-a362-48cd-8387-34c060df18a1-rootfs\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.133763 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgsn\" (UniqueName: 
\"kubernetes.io/projected/5ea42312-a362-48cd-8387-34c060df18a1-kube-api-access-7tgsn\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.234543 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5ea42312-a362-48cd-8387-34c060df18a1-rootfs\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.234601 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tgsn\" (UniqueName: \"kubernetes.io/projected/5ea42312-a362-48cd-8387-34c060df18a1-kube-api-access-7tgsn\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.234648 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ea42312-a362-48cd-8387-34c060df18a1-proxy-tls\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.234669 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ea42312-a362-48cd-8387-34c060df18a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.234708 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5ea42312-a362-48cd-8387-34c060df18a1-rootfs\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.235339 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ea42312-a362-48cd-8387-34c060df18a1-mcd-auth-proxy-config\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.237837 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ea42312-a362-48cd-8387-34c060df18a1-proxy-tls\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.258300 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tgsn\" (UniqueName: \"kubernetes.io/projected/5ea42312-a362-48cd-8387-34c060df18a1-kube-api-access-7tgsn\") pod \"machine-config-daemon-nkvlp\" (UID: \"5ea42312-a362-48cd-8387-34c060df18a1\") " pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 
06:54:05.265796 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.265899 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1206b15d3e8f7968bc15b82738704c9255842542faa93705bd682fbc9568024a"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.269356 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.271544 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.271706 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.273097 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"904a01a4d4fc254e91fb0ba89f900f28e95d60da37ce23c19dc2e00182ecad87"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.274423 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nf5b8" event={"ID":"a379b846-ea80-4665-a69c-79b745d168ee","Type":"ContainerStarted","Data":"138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.274455 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nf5b8" event={"ID":"a379b846-ea80-4665-a69c-79b745d168ee","Type":"ContainerStarted","Data":"7253b09329a7eb6749486f96eb65f0e9e91d99738dc85ee99482839d9b5bd112"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.276090 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.276151 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.276163 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"90e14d16ae8cbe8332719a86d05fd505eb100b5b4d1d20b19806d6e9604067ff"} Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.278961 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.286744 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.291808 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.300583 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:54:05 crc kubenswrapper[4872]: W0127 06:54:05.307342 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ea42312_a362_48cd_8387_34c060df18a1.slice/crio-6aad4c49ce095bf7dce0e1e98ad8946ffa556b8111f19222700db387106a5df1 WatchSource:0}: Error finding container 6aad4c49ce095bf7dce0e1e98ad8946ffa556b8111f19222700db387106a5df1: Status 404 returned error can't find the container with id 6aad4c49ce095bf7dce0e1e98ad8946ffa556b8111f19222700db387106a5df1 Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.318234 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27
T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.333510 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.346490 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.359991 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.367716 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nvjgr"] Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.368096 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tk2w6"] Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.368278 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.369168 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.370415 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ww8p7"] Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.371324 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.372434 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.372637 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.372816 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.373039 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.373198 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.374930 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.375273 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.375419 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.375625 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.375754 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.375927 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.377457 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.377768 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.377998 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.384676 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.407366 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.426994 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437224 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-config\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437273 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-conf-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437289 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-system-cni-dir\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437339 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437356 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437390 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-cni-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437421 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-os-release\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437438 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-cni-multus\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437452 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-cni-bin\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437467 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-multus-certs\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437504 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437519 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-socket-dir-parent\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437532 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-etc-kubernetes\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437547 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-os-release\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437580 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cni-binary-copy\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437656 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-slash\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437672 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-var-lib-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437687 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-node-log\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437700 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-system-cni-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437731 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-systemd-units\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437746 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-netd\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 
27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437760 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.437901 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-ovn-kubernetes\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438101 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-env-overrides\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438181 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-cnibin\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438234 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-hostroot\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438272 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-daemon-config\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438301 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7mxq\" (UniqueName: \"kubernetes.io/projected/f7097f1e-1b27-4ad4-a772-f62ec2fae899-kube-api-access-s7mxq\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438367 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-kubelet\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438388 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-log-socket\") pod \"ovnkube-node-ww8p7\" (UID: 
\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438406 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-netns\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438461 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-systemd\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438479 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cnibin\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438516 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovn-node-metrics-cert\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438534 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-k8s-cni-cncf-io\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438675 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-ovn\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438704 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99glr\" (UniqueName: \"kubernetes.io/projected/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-kube-api-access-99glr\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.438719 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-cni-binary-copy\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.439303 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-netns\") pod 
\"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.439337 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-etc-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.439352 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnln\" (UniqueName: \"kubernetes.io/projected/b62e2eec-d750-4b03-90a4-4082a5d8ca18-kube-api-access-9xnln\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.439381 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-script-lib\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.439411 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-bin\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.439446 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-kubelet\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.441734 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.461174 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.491145 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.517413 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.530813 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540551 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-ovn\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540592 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99glr\" (UniqueName: \"kubernetes.io/projected/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-kube-api-access-99glr\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540612 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-netns\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540629 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-etc-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540646 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnln\" (UniqueName: \"kubernetes.io/projected/b62e2eec-d750-4b03-90a4-4082a5d8ca18-kube-api-access-9xnln\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540663 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-cni-binary-copy\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540681 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-script-lib\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540696 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-bin\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540713 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-kubelet\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540733 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-config\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540749 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-conf-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540765 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-system-cni-dir\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540782 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540759 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-ovn\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540803 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540905 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-bin\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540949 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-conf-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540979 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-cni-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.540997 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-kubelet\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541150 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-netns\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541183 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-os-release\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541207 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-cni-multus\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541242 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541266 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-socket-dir-parent\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541289 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-cni-bin\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " 
pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541313 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-multus-certs\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541338 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-os-release\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541357 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-etc-kubernetes\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541389 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-slash\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541411 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-var-lib-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541431 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-node-log\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541451 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cni-binary-copy\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541482 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-systemd-units\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541504 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-netd\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 
06:54:05.541525 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541549 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-system-cni-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541557 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-system-cni-dir\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541579 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-ovn-kubernetes\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541603 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-env-overrides\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541624 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-cnibin\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541606 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541646 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-etc-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541648 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-hostroot\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541681 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-hostroot\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541723 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541727 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-var-lib-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541753 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-node-log\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541786 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-system-cni-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541816 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-ovn-kubernetes\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541817 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-systemd-units\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541917 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-os-release\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.541997 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-openvswitch\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542015 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-multus-certs\") pod \"multus-nvjgr\" (UID: 
\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542059 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-netd\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542043 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-cni-multus\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542110 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-cnibin\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542137 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-daemon-config\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542143 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-os-release\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542169 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-slash\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542184 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-etc-kubernetes\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542188 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-cni-binary-copy\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542247 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-script-lib\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542265 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-env-overrides\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542315 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-kubelet\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542353 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-log-socket\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542418 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-netns\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542428 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-cni-dir\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542450 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7mxq\" (UniqueName: \"kubernetes.io/projected/f7097f1e-1b27-4ad4-a772-f62ec2fae899-kube-api-access-s7mxq\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542478 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-socket-dir-parent\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542494 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-log-socket\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542495 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-var-lib-cni-bin\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542525 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-netns\") pod \"multus-nvjgr\" (UID: 
\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542635 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-systemd\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542667 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cnibin\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542637 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-kubelet\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542723 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovn-node-metrics-cert\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542709 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cnibin\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542951 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-systemd\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542978 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f7097f1e-1b27-4ad4-a772-f62ec2fae899-cni-binary-copy\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.542985 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-k8s-cni-cncf-io\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.543062 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-host-run-k8s-cni-cncf-io\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 
crc kubenswrapper[4872]: I0127 06:54:05.543342 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-multus-daemon-config\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.543394 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-config\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.543827 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7097f1e-1b27-4ad4-a772-f62ec2fae899-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.547998 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovn-node-metrics-cert\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.557082 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.608226 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.633210 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.646388 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.646549 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.646654 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:07.646633967 +0000 UTC m=+24.174109163 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.653487 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnln\" (UniqueName: \"kubernetes.io/projected/b62e2eec-d750-4b03-90a4-4082a5d8ca18-kube-api-access-9xnln\") pod \"ovnkube-node-ww8p7\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.654350 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99glr\" (UniqueName: \"kubernetes.io/projected/8575a338-fc73-4413-ab05-0fdfdd6bdf2d-kube-api-access-99glr\") pod \"multus-nvjgr\" (UID: \"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\") " pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.655060 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7mxq\" (UniqueName: \"kubernetes.io/projected/f7097f1e-1b27-4ad4-a772-f62ec2fae899-kube-api-access-s7mxq\") pod \"multus-additional-cni-plugins-tk2w6\" (UID: \"f7097f1e-1b27-4ad4-a772-f62ec2fae899\") " pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.660210 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.683148 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.686736 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nvjgr" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.698638 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.704745 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.712214 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 
27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.732555 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.747711 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.747876 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:54:07.747818637 +0000 UTC m=+24.275293833 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.748307 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.748427 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.748533 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.748565 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.748757 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:07.748745401 +0000 UTC m=+24.276220597 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.748595 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.748967 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.749051 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.748650 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.749156 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.749172 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.749327 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:07.749136691 +0000 UTC m=+24.276611887 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:05 crc kubenswrapper[4872]: E0127 06:54:05.749416 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:07.749402657 +0000 UTC m=+24.276877853 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:05 crc kubenswrapper[4872]: W0127 06:54:05.765347 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7097f1e_1b27_4ad4_a772_f62ec2fae899.slice/crio-13d05a5cf9a5bfaf06740c13f50c08c1c19f0037bf0d23f58f16802c50d289e0 WatchSource:0}: Error finding container 13d05a5cf9a5bfaf06740c13f50c08c1c19f0037bf0d23f58f16802c50d289e0: Status 404 returned error can't find the container with id 13d05a5cf9a5bfaf06740c13f50c08c1c19f0037bf0d23f58f16802c50d289e0 Jan 27 06:54:05 crc kubenswrapper[4872]: I0127 06:54:05.770621 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.058949 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:00:30.089612692 +0000 UTC Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.097602 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.097667 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:06 crc kubenswrapper[4872]: E0127 06:54:06.097775 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.097864 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:06 crc kubenswrapper[4872]: E0127 06:54:06.097998 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:06 crc kubenswrapper[4872]: E0127 06:54:06.098079 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.101519 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.102313 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.103570 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.104441 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.105504 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.106009 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.106597 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.107765 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.108466 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.109391 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.109920 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.111294 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.111817 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.112589 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.113594 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.114165 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.115147 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.115557 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.116161 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.117137 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.117603 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.118697 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.119142 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.120369 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.120918 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.121709 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.125327 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.125825 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.127378 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.127994 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.128924 4872 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.129053 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.130761 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.131898 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.132392 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.134338 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.135183 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.136240 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.136986 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.138052 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.138914 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.139954 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.140636 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.141887 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.142802 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.144686 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.145693 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.146470 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.147036 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.147629 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.148128 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.148680 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.149295 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.149802 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.281221 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerStarted","Data":"12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.281301 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerStarted","Data":"13d05a5cf9a5bfaf06740c13f50c08c1c19f0037bf0d23f58f16802c50d289e0"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.283480 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c" exitCode=0 Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.283528 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.283605 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"de5353229071b7ca67ee4936300b203fa68d3671141d3d704156082a85aab624"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.286785 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.286865 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.286880 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"6aad4c49ce095bf7dce0e1e98ad8946ffa556b8111f19222700db387106a5df1"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.288850 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerStarted","Data":"573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.288887 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerStarted","Data":"7dca8a896227d89ef897c1a74ad279e0c169e178a6748c84488c97298c4e69dd"} Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.308475 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.335357 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.350329 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.375051 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.401255 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.424734 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.442991 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.464772 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.495972 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.516292 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.541719 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.545312 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jfj5q"] Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.545719 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.552977 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.553353 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.553594 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.557289 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.565980 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.584378 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.597272 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.628741 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.650614 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.659100 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/731616c6-fda8-4ce3-b678-42c61255141c-host\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.659173 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b24c\" (UniqueName: \"kubernetes.io/projected/731616c6-fda8-4ce3-b678-42c61255141c-kube-api-access-4b24c\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.659190 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/731616c6-fda8-4ce3-b678-42c61255141c-serviceca\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.665423 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.682343 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.698136 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.723048 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.744350 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.756910 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.759758 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b24c\" (UniqueName: \"kubernetes.io/projected/731616c6-fda8-4ce3-b678-42c61255141c-kube-api-access-4b24c\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.759938 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/731616c6-fda8-4ce3-b678-42c61255141c-serviceca\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.760101 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/731616c6-fda8-4ce3-b678-42c61255141c-host\") 
pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.760211 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/731616c6-fda8-4ce3-b678-42c61255141c-host\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.761148 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/731616c6-fda8-4ce3-b678-42c61255141c-serviceca\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.774536 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.780823 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b24c\" (UniqueName: 
\"kubernetes.io/projected/731616c6-fda8-4ce3-b678-42c61255141c-kube-api-access-4b24c\") pod \"node-ca-jfj5q\" (UID: \"731616c6-fda8-4ce3-b678-42c61255141c\") " pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.789041 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.809578 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.824029 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.839595 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:06 crc kubenswrapper[4872]: I0127 06:54:06.863386 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jfj5q" Jan 27 06:54:06 crc kubenswrapper[4872]: W0127 06:54:06.917815 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod731616c6_fda8_4ce3_b678_42c61255141c.slice/crio-c3a20618a09b2bf9257120ceaf19c6be586a68022c6fb8e376475fb7ced2cb33 WatchSource:0}: Error finding container c3a20618a09b2bf9257120ceaf19c6be586a68022c6fb8e376475fb7ced2cb33: Status 404 returned error can't find the container with id c3a20618a09b2bf9257120ceaf19c6be586a68022c6fb8e376475fb7ced2cb33 Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.059415 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:21:20.845529648 +0000 UTC Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.293618 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jfj5q" event={"ID":"731616c6-fda8-4ce3-b678-42c61255141c","Type":"ContainerStarted","Data":"c3a20618a09b2bf9257120ceaf19c6be586a68022c6fb8e376475fb7ced2cb33"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.296681 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7097f1e-1b27-4ad4-a772-f62ec2fae899" containerID="12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c" exitCode=0 Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.296741 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerDied","Data":"12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.302692 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.302832 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.302925 4872 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.303016 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.304518 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58"} Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.316061 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.338770 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.360747 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.379114 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.402136 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.420416 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.436097 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.450803 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.474738 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\
\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.489291 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.505215 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.527881 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc 
kubenswrapper[4872]: I0127 06:54:07.561104 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.577359 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.591112 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.605879 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.624716 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.655299 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.669270 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.672824 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.673161 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.673400 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:11.67337686 +0000 UTC m=+28.200852066 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.689325 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.704172 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.722887 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.739057 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.752964 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.772546 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc 
kubenswrapper[4872]: I0127 06:54:07.774099 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.774299 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774371 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:54:11.774323956 +0000 UTC m=+28.301799172 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.774460 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774495 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774523 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774539 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.774568 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774616 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:11.774592232 +0000 UTC m=+28.302067638 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774619 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774741 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:11.774693555 +0000 UTC m=+28.302168961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774838 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774917 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.774938 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:07 crc kubenswrapper[4872]: E0127 06:54:07.775001 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:11.774983242 +0000 UTC m=+28.302458638 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.808503 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.842596 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:07 crc kubenswrapper[4872]: I0127 06:54:07.879287 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:07Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.060854 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:30:50.281592428 +0000 UTC Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.097929 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.098009 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:08 crc kubenswrapper[4872]: E0127 06:54:08.098068 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.098014 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:08 crc kubenswrapper[4872]: E0127 06:54:08.098184 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:08 crc kubenswrapper[4872]: E0127 06:54:08.098318 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.310962 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jfj5q" event={"ID":"731616c6-fda8-4ce3-b678-42c61255141c","Type":"ContainerStarted","Data":"90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4"} Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.315988 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1"} Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.316060 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf"} Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.318641 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7097f1e-1b27-4ad4-a772-f62ec2fae899" containerID="c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a" exitCode=0 Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.318772 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerDied","Data":"c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a"} Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.333956 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.360627 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.376000 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.393478 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.407650 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.423744 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc 
kubenswrapper[4872]: I0127 06:54:08.436890 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.447250 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.459219 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.474837 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.489382 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.506047 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.524623 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.535824 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.549472 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.569652 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.583392 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.597727 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.639120 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.680163 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.718135 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.762203 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.800299 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.843588 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.880749 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.919179 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.964562 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:54:08 crc kubenswrapper[4872]: I0127 06:54:08.999412 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.061483 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:33:21.757283419 +0000 UTC Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.324312 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7097f1e-1b27-4ad4-a772-f62ec2fae899" containerID="52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749" exitCode=0 Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.324856 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" 
event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerDied","Data":"52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749"} Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.350433 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355
e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.369159 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.384835 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.398420 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.410243 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.425568 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.440943 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.453829 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.467395 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.488569 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.499106 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.501581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.501632 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.501646 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.502664 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.511505 4872 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.511782 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.511866 4872 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.513262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.513316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.513330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.513351 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.513364 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.526747 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-r
un-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: E0127 06:54:09.528243 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.532461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.532504 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.532515 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.532531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.532542 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: E0127 06:54:09.545797 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.550744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.550806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.550823 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.550869 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.550887 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: E0127 06:54:09.563030 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.566502 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.567118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.567160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.567172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.567191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.567204 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: E0127 06:54:09.581073 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.584651 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.584685 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.584695 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.584717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.584731 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.597395 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: E0127 06:54:09.599610 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:09 crc kubenswrapper[4872]: E0127 06:54:09.599723 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.601326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.601368 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.601379 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.601401 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.601412 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.704122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.704164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.704173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.704188 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.704199 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.807140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.807742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.807757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.807780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.807792 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.910734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.910805 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.910821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.910904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:09 crc kubenswrapper[4872]: I0127 06:54:09.910932 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:09Z","lastTransitionTime":"2026-01-27T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.014308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.014341 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.014350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.014364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.014372 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.062964 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:20:28.091594349 +0000 UTC Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.097413 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.097545 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:10 crc kubenswrapper[4872]: E0127 06:54:10.097877 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.097648 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:10 crc kubenswrapper[4872]: E0127 06:54:10.098152 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:10 crc kubenswrapper[4872]: E0127 06:54:10.098042 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.117883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.117971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.117994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.118031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.118056 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.220700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.220747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.220757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.220773 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.220784 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.249450 4872 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.324025 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.324093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.324112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.324147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.324165 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.334228 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.338094 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7097f1e-1b27-4ad4-a772-f62ec2fae899" containerID="b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f" exitCode=0 Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.338174 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerDied","Data":"b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.361214 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.384300 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.404811 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.422237 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.431999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.432038 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.432048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.432063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.432073 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.438217 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.455118 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.471578 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.485307 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.499685 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.515588 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.534658 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.536018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.536055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.536065 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.536084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.536095 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.553335 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.570973 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.588697 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.639095 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.639135 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.639145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.639161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.639171 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.742906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.742980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.742991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.743029 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.743080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.847218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.847269 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.847280 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.847301 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.847311 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.950099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.950132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.950141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.950153 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:10 crc kubenswrapper[4872]: I0127 06:54:10.950162 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:10Z","lastTransitionTime":"2026-01-27T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.052424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.052472 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.052483 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.052503 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.052515 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.063867 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:12:18.912249669 +0000 UTC Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.155682 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.155744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.155762 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.155786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.155802 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.191735 4872 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.258793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.258879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.258894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.258918 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.258933 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.346367 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7097f1e-1b27-4ad4-a772-f62ec2fae899" containerID="9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921" exitCode=0 Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.346436 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerDied","Data":"9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.362218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.362268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.362280 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.362300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.362310 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.365740 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.388179 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.406861 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.425560 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.458770 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.471869 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.472007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.472074 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.472141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.472204 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.487990 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.510671 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.537692 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.554673 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.575171 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.579516 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.579542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.579552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.579569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.579581 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.594662 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.605814 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.623306 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.637163 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.683132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.683179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.683191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.683213 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.683226 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.718386 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.718553 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.718620 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:19.71860545 +0000 UTC m=+36.246080646 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.786831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.786927 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.786941 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.786977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.786991 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.819614 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.819789 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.819968 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:54:19.819906553 +0000 UTC m=+36.347381779 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.819998 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820029 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820051 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.820097 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820128 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:19.820102428 +0000 UTC m=+36.347577654 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.820171 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820266 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820343 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:19.820325963 +0000 UTC m=+36.347801199 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820349 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820372 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820389 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:11 crc kubenswrapper[4872]: E0127 06:54:11.820436 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:19.820424496 +0000 UTC m=+36.347899892 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.890152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.890217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.890236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.890263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.890281 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.993483 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.993545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.993568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.993604 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:11 crc kubenswrapper[4872]: I0127 06:54:11.993627 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:11Z","lastTransitionTime":"2026-01-27T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.065010 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:43:20.052755281 +0000 UTC Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097317 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097372 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: E0127 06:54:12.097534 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097614 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:12 crc kubenswrapper[4872]: E0127 06:54:12.097676 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.097829 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:12 crc kubenswrapper[4872]: E0127 06:54:12.097912 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.185027 4872 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.200080 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.200139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.200151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.200175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.200188 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.303156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.303237 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.303263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.303295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.303322 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.354832 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7097f1e-1b27-4ad4-a772-f62ec2fae899" containerID="b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e" exitCode=0 Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.354970 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerDied","Data":"b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.365744 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.366549 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.366598 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.375969 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.400809 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.402238 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.402493 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.406026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.406057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.406070 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.406087 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.406100 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.422746 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.443597 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.456826 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.475337 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.489906 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.506565 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.509022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.509060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.509076 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.509099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.509114 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.520943 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.536016 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.554159 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.573685 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.589055 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.601713 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.611523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.611666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.611750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.611834 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.611930 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.616575 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.629570 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.641961 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.656741 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.669297 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.681977 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.705352 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f
787530861c68a1e9c9d843a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.714256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.714315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.714333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.714358 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.714373 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.717302 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.733874 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.749460 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.763139 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.780582 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.794952 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.809933 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.817257 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.817300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.817311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.817327 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.817339 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.920658 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.920726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.920741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.920765 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:12 crc kubenswrapper[4872]: I0127 06:54:12.921008 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:12Z","lastTransitionTime":"2026-01-27T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.023455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.023498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.023509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.023526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.023537 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.066170 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:52:42.795712449 +0000 UTC Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.127390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.127457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.127468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.127486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.127498 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.229977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.230026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.230037 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.230057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.230069 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.332448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.332722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.332814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.332931 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.333019 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.373213 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.373996 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" event={"ID":"f7097f1e-1b27-4ad4-a772-f62ec2fae899","Type":"ContainerStarted","Data":"36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.391670 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.404490 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.414681 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.428456 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.435293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.435488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.435552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.435616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.435712 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.446925 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f
787530861c68a1e9c9d843a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.461433 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.477902 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.492364 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.505302 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.518696 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.530029 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.539110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.539154 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.539165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.539185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.539202 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.547451 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.565512 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.581707 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.642443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.642496 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.642510 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.642530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.642543 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.745365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.745413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.745425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.745448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.745460 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.848780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.849186 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.849327 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.849459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.849689 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.954278 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.954320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.954333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.954353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:13 crc kubenswrapper[4872]: I0127 06:54:13.954367 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:13Z","lastTransitionTime":"2026-01-27T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.058039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.058094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.058108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.058130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.058145 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.067218 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:26:02.559165134 +0000 UTC Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.098161 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.098236 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.098185 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:14 crc kubenswrapper[4872]: E0127 06:54:14.098375 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:14 crc kubenswrapper[4872]: E0127 06:54:14.098612 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:14 crc kubenswrapper[4872]: E0127 06:54:14.098757 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.118320 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.130964 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.146379 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.160917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.160969 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.160981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.161000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.161012 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.168341 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f
787530861c68a1e9c9d843a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.179082 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.200858 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.222732 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.239363 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.256230 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.263864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.263896 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.263908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.263925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.263936 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.269833 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.282120 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.293997 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.308135 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.320749 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.366016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.366056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.366065 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.366078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.366089 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.376423 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.469359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.469658 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.469722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.469872 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.469976 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.572598 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.572670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.572695 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.572727 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.572780 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.675306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.675359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.675373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.675395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.675410 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.778392 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.778439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.778452 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.778471 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.778486 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.881745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.881799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.881809 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.881825 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.881836 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.984534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.984864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.984971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.985077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:14 crc kubenswrapper[4872]: I0127 06:54:14.985171 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:14Z","lastTransitionTime":"2026-01-27T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.068406 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:10:49.13309179 +0000 UTC Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.087919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.088222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.088293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.088355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.088428 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.190648 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.190692 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.190705 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.190725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.190739 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.293966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.294048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.294060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.294077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.294088 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.382110 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/0.log" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.385250 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3" exitCode=1 Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.385297 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.386150 4872 scope.go:117] "RemoveContainer" containerID="522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.400754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.400800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.400814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.400833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.400859 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.405297 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.423147 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.434827 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.454796 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.473651 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.503196 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.503252 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.503269 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.503299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.503317 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.505033 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.538999 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.551919 4872 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.577403 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.597188 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.605883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.605949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc 
kubenswrapper[4872]: I0127 06:54:15.605963 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.605987 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.606001 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.614767 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.628728 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.646282 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.668027 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.682937 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.707922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.707971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.707983 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.708193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.708204 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.811099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.811147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.811171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.811190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.811202 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.914254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.914293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.914305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.914322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:15 crc kubenswrapper[4872]: I0127 06:54:15.914334 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:15Z","lastTransitionTime":"2026-01-27T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.022524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.022585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.022596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.022620 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.022638 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.069409 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:19:53.10846818 +0000 UTC Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.097978 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.098064 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:16 crc kubenswrapper[4872]: E0127 06:54:16.098160 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:16 crc kubenswrapper[4872]: E0127 06:54:16.098326 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.097990 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:16 crc kubenswrapper[4872]: E0127 06:54:16.098445 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.125656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.125691 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.125700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.125714 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.125726 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.229303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.229365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.229377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.229399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.229413 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.332323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.332636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.332698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.332771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.332872 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.391892 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/0.log" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.394379 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.394538 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.408745 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.419731 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.432196 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.434993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.435018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.435027 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.435039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.435068 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.445668 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.461642 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.475302 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.488300 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.500572 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.510758 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.527233 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.536901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.537106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.537322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.537469 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.537600 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.552676 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3
165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.563831 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.577163 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.587660 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.640496 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.640740 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.640894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.640998 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.641076 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.744293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.744354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.744371 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.744392 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.744406 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.847542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.847597 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.847621 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.847643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.847658 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.950535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.950585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.950596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.950614 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:16 crc kubenswrapper[4872]: I0127 06:54:16.950625 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:16Z","lastTransitionTime":"2026-01-27T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.053537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.053890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.054013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.054261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.054450 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.069618 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:01:43.665979859 +0000 UTC Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.157941 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.157996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.158012 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.158031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.158047 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.261126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.261187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.261196 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.261213 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.261224 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.363013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.363081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.363092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.363110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.363121 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.401074 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/1.log" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.401776 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/0.log" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.405757 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224" exitCode=1 Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.405820 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.405921 4872 scope.go:117] "RemoveContainer" containerID="522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.409310 4872 scope.go:117] "RemoveContainer" containerID="67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224" Jan 27 06:54:17 crc kubenswrapper[4872]: E0127 06:54:17.409969 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.432232 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.453981 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.467599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.467994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.468077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.468155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.468235 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.471021 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.485947 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.500326 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.523824 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.537683 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.551446 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn"] Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.552082 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.555144 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.556501 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.557237 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.571559 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.571676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.571696 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.571724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.571747 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.573742 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster
-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.590189 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544tw\" (UniqueName: \"kubernetes.io/projected/89bc130f-996f-40e6-9015-b2023a608044-kube-api-access-544tw\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.590231 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89bc130f-996f-40e6-9015-b2023a608044-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.590250 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89bc130f-996f-40e6-9015-b2023a608044-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.590282 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89bc130f-996f-40e6-9015-b2023a608044-env-overrides\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.596249 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.609813 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.626175 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.645073 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.664073 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.674391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.674437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.674450 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.674468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.674480 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.679534 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.691913 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-544tw\" (UniqueName: \"kubernetes.io/projected/89bc130f-996f-40e6-9015-b2023a608044-kube-api-access-544tw\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.691962 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89bc130f-996f-40e6-9015-b2023a608044-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.691987 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89bc130f-996f-40e6-9015-b2023a608044-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.692024 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89bc130f-996f-40e6-9015-b2023a608044-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.692978 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89bc130f-996f-40e6-9015-b2023a608044-env-overrides\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.694294 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89bc130f-996f-40e6-9015-b2023a608044-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.695174 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\
\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.703498 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89bc130f-996f-40e6-9015-b2023a608044-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.707998 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.708754 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544tw\" (UniqueName: \"kubernetes.io/projected/89bc130f-996f-40e6-9015-b2023a608044-kube-api-access-544tw\") pod \"ovnkube-control-plane-749d76644c-whdsn\" (UID: \"89bc130f-996f-40e6-9015-b2023a608044\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.722135 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.740009 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.755244 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.769823 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.776996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.777052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.777070 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.777090 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.777103 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.781281 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.793631 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.806588 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.818593 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.836747 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.847920 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.862132 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.868431 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.874770 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.882742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.882783 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.882794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.882813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.882826 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:17 crc kubenswrapper[4872]: W0127 06:54:17.883819 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89bc130f_996f_40e6_9015_b2023a608044.slice/crio-10642d46e8fe4e6dde4dfa726a3b70bbb620f7d355c6786e8c23c8e4a537e203 WatchSource:0}: Error finding container 10642d46e8fe4e6dde4dfa726a3b70bbb620f7d355c6786e8c23c8e4a537e203: Status 404 returned error can't find the container with id 10642d46e8fe4e6dde4dfa726a3b70bbb620f7d355c6786e8c23c8e4a537e203 Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.987417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.987476 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.987490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.987511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:17 crc kubenswrapper[4872]: I0127 06:54:17.987524 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:17Z","lastTransitionTime":"2026-01-27T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.069785 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:32:03.972380921 +0000 UTC Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.092414 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.092460 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.092479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.092496 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.092508 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.097834 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.097834 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:18 crc kubenswrapper[4872]: E0127 06:54:18.097968 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.097980 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:18 crc kubenswrapper[4872]: E0127 06:54:18.098150 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:18 crc kubenswrapper[4872]: E0127 06:54:18.098740 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.195398 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.195447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.195456 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.195472 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.195482 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.298989 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.299052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.299066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.299091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.299106 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.402683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.402731 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.402743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.402760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.402772 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.410004 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" event={"ID":"89bc130f-996f-40e6-9015-b2023a608044","Type":"ContainerStarted","Data":"9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.410042 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" event={"ID":"89bc130f-996f-40e6-9015-b2023a608044","Type":"ContainerStarted","Data":"80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.410054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" event={"ID":"89bc130f-996f-40e6-9015-b2023a608044","Type":"ContainerStarted","Data":"10642d46e8fe4e6dde4dfa726a3b70bbb620f7d355c6786e8c23c8e4a537e203"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.411981 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/1.log" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.440214 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.453007 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.465507 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.478684 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.491215 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.506038 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.506806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.506859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.506875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.506896 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.506907 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.525353 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.538460 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.550554 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 
06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.565939 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.580182 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.593392 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.604702 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.609718 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.609761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.609774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.609793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.609806 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.621086 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.636946 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.672866 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nstjz"] Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.673443 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:18 crc kubenswrapper[4872]: E0127 06:54:18.673510 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.697095 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.703616 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.703799 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvndr\" (UniqueName: \"kubernetes.io/projected/f22e033f-46c7-4d57-a333-e1eee5cd3091-kube-api-access-zvndr\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.712621 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.712659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.712669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.712683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.712693 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.723672 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.738280 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.756196 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.772626 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.787159 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.801327 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 
06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.804529 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.804607 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvndr\" (UniqueName: \"kubernetes.io/projected/f22e033f-46c7-4d57-a333-e1eee5cd3091-kube-api-access-zvndr\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:18 crc kubenswrapper[4872]: E0127 06:54:18.804965 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:18 crc kubenswrapper[4872]: E0127 06:54:18.805023 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:19.305008481 +0000 UTC m=+35.832483677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.816034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.816082 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.816095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.816116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.816131 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.818757 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.829892 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvndr\" (UniqueName: 
\"kubernetes.io/projected/f22e033f-46c7-4d57-a333-e1eee5cd3091-kube-api-access-zvndr\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.833411 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.848005 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.864355 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.882335 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.898098 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.912790 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.919060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.919093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.919103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.919117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.919127 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:18Z","lastTransitionTime":"2026-01-27T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.926697 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.938455 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.950780 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.964547 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.979937 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:18 crc kubenswrapper[4872]: I0127 06:54:18.994054 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.009832 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.021493 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.021572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.021588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.021609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.021623 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.025494 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.041927 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.055081 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.070753 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:37:56.398721209 +0000 UTC Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.071586 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.084946 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.095098 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.108459 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.122695 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.123673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.123723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.123742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.123763 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.123777 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.136892 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.157904 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] 
Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.171034 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.184358 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.227257 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.227305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.227315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.227332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.227344 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.309583 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.309883 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.309967 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:20.309950128 +0000 UTC m=+36.837425324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.331165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.331224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.331239 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.331258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.331272 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.433434 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.433488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.433501 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.433522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.433538 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.536374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.536427 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.536438 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.536459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.536471 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.639798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.639881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.639899 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.639917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.639927 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.743282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.743327 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.743340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.743359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.743372 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.816366 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.816523 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.816684 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:35.81666293 +0000 UTC m=+52.344138126 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.845970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.846306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.846395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.846473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.846536 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.909591 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.909635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.909644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.909662 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.909671 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.917779 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:54:35.917743128 +0000 UTC m=+52.445218314 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.917647 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.917988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918153 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918171 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918182 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.918696 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918803 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918818 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918827 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918870 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:35.918862246 +0000 UTC m=+52.446337442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.918914 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918958 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:35.918936838 +0000 UTC m=+52.446412034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.918981 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.919030 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:35.91902175 +0000 UTC m=+52.446496946 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.923356 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.927589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.927623 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.927631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.927647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.927665 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.939392 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.943267 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.943289 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.943298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.943313 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.943323 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.955367 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.959166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.959195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.959206 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.959224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.959236 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.971563 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.975365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.975403 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.975415 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.975435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.975448 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.987384 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:19 crc kubenswrapper[4872]: E0127 06:54:19.987536 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.989418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.989474 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.989487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.989507 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:19 crc kubenswrapper[4872]: I0127 06:54:19.989519 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:19Z","lastTransitionTime":"2026-01-27T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.071338 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:32:17.921566978 +0000 UTC Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.092461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.092734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.092745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.092761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.092772 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.100057 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:20 crc kubenswrapper[4872]: E0127 06:54:20.100734 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.100180 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:20 crc kubenswrapper[4872]: E0127 06:54:20.100814 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.100124 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:20 crc kubenswrapper[4872]: E0127 06:54:20.100885 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.196161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.196459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.196561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.196657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.196747 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.299824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.300158 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.300246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.300336 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.300418 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.323616 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:20 crc kubenswrapper[4872]: E0127 06:54:20.323905 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:20 crc kubenswrapper[4872]: E0127 06:54:20.324034 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:22.324005058 +0000 UTC m=+38.851480424 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.403767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.404186 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.404340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.404428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.404509 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.508350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.508419 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.508431 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.508448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.508460 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.611545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.611906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.612013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.612107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.612293 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.715765 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.715812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.715822 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.715861 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.715876 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.818590 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.818649 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.818662 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.818682 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.818697 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.921903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.921975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.921990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.922009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:20 crc kubenswrapper[4872]: I0127 06:54:20.922047 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:20Z","lastTransitionTime":"2026-01-27T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.025560 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.025882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.026029 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.026144 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.026427 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.071726 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:26:56.83362467 +0000 UTC Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.097324 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:21 crc kubenswrapper[4872]: E0127 06:54:21.097751 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.129607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.129901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.130140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.130362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.130561 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.233381 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.233429 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.233439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.233457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.233470 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.336357 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.336621 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.336779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.336951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.337101 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.439730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.439813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.439831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.439874 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.439890 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.542964 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.543634 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.543745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.543829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.543975 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.647137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.647171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.647200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.647219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.647230 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.750372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.750425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.750435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.750455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.750477 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.852891 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.852929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.852940 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.852960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.852970 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.955587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.955872 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.955952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.956021 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:21 crc kubenswrapper[4872]: I0127 06:54:21.956133 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:21Z","lastTransitionTime":"2026-01-27T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.059686 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.059719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.059727 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.059744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.059754 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.072609 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:23:37.559068623 +0000 UTC Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.101576 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.101569 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:22 crc kubenswrapper[4872]: E0127 06:54:22.101757 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.101589 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:22 crc kubenswrapper[4872]: E0127 06:54:22.101996 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:22 crc kubenswrapper[4872]: E0127 06:54:22.102110 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.162260 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.162559 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.162620 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.162683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.162761 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.266054 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.266100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.266112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.266136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.266153 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.346856 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:22 crc kubenswrapper[4872]: E0127 06:54:22.347081 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:22 crc kubenswrapper[4872]: E0127 06:54:22.347486 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:26.347462729 +0000 UTC m=+42.874937925 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.368954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.369182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.369274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.369395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.369459 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.472508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.472547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.472556 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.472574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.472585 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.576501 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.577349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.577390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.577417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.577442 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.681581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.681648 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.681670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.681713 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.681733 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.785189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.785244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.785258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.785317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.785332 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.887904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.887948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.887962 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.887982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.887994 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.991557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.991616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.991631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.991653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:22 crc kubenswrapper[4872]: I0127 06:54:22.991669 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:22Z","lastTransitionTime":"2026-01-27T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.073782 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:59:45.211551743 +0000 UTC Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.095358 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.095630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.095707 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.095797 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.095893 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.097387 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:23 crc kubenswrapper[4872]: E0127 06:54:23.097563 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.198839 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.198937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.198951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.198971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.198983 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.301276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.301342 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.301352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.301369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.301381 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.404440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.404481 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.404491 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.404508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.404519 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.507198 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.507251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.507264 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.507282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.507294 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.609671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.609719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.609728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.609747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.609760 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.712407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.712461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.712473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.712495 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.712510 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.815027 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.815083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.815096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.815116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.815128 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.917544 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.917592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.917603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.917623 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:23 crc kubenswrapper[4872]: I0127 06:54:23.917637 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:23Z","lastTransitionTime":"2026-01-27T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.020266 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.020307 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.020316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.020541 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.020558 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.074152 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:59:58.327940965 +0000 UTC Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.097750 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:24 crc kubenswrapper[4872]: E0127 06:54:24.097935 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.098005 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:24 crc kubenswrapper[4872]: E0127 06:54:24.098083 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.098276 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:24 crc kubenswrapper[4872]: E0127 06:54:24.098461 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.112466 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.123954 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.124194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.124230 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.124246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.124282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.124297 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.146333 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.171876 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.208097 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.227125 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.227311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.227687 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.227943 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.228119 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.228276 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.244516 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.262822 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.284206 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.298826 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.314153 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.329613 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.331098 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.331131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.331140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.331156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.331166 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.343676 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.357541 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.376370 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3
165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522e4ac494f822dbc62e91c7c5d388bf7de9f51f787530861c68a1e9c9d843a3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:15Z\\\",\\\"message\\\":\\\"Sending *v1.Node event handler 2 for removal\\\\nI0127 06:54:15.213614 6067 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:15.213643 6067 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:54:15.213654 6067 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:15.213684 6067 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 06:54:15.213692 6067 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 06:54:15.213704 6067 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:54:15.213724 6067 factory.go:656] Stopping watch factory\\\\nI0127 06:54:15.213720 6067 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:15.213736 6067 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:15.213744 6067 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:15.213749 6067 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:15.214050 6067 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 06:54:15.214170 6067 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 06:54:15.214227 6067 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:54:15.214328 6067 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:54:15.214391 6067 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.389532 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.433218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.433262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.433272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.433290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.433301 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.535902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.535957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.535966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.535984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.535994 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.638419 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.638460 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.638470 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.638494 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.638505 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.741249 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.741311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.741330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.741353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.741367 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.844291 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.844331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.844340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.844356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.844366 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.946937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.946984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.946999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.947018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:24 crc kubenswrapper[4872]: I0127 06:54:24.947031 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:24Z","lastTransitionTime":"2026-01-27T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.049689 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.049734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.049747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.049767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.049779 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.075286 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:13:46.136407447 +0000 UTC Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.097913 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:25 crc kubenswrapper[4872]: E0127 06:54:25.098127 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.153078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.153129 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.153149 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.153172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.153190 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.256585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.256631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.256641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.256659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.256671 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.359404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.359458 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.359469 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.359490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.359503 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.461739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.461807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.461827 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.461887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.461909 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.564871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.564929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.564941 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.564959 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.564975 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.667678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.667724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.667734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.667750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.667762 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.771146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.771219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.771232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.771279 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.771294 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.874499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.874550 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.874561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.874579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.874592 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.977786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.977863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.977873 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.977890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:25 crc kubenswrapper[4872]: I0127 06:54:25.977901 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:25Z","lastTransitionTime":"2026-01-27T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.076440 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:30:15.073061379 +0000 UTC Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.080884 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.080921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.080932 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.080951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.080962 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.097230 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.097299 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:26 crc kubenswrapper[4872]: E0127 06:54:26.097415 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.097463 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:26 crc kubenswrapper[4872]: E0127 06:54:26.097578 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:26 crc kubenswrapper[4872]: E0127 06:54:26.097642 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
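Annotation: the repeated "failed calling webhook \"pod.network-node-identity.openshift.io\" ... x509: certificate has expired or is not yet valid" entries earlier in this log all reduce to the same condition: the webhook's serving certificate has a NotAfter of 2025-08-24T17:21:41Z, which is earlier than the node's current time (2026-01-27). A minimal Go sketch of that validity check, assuming a PEM-encoded certificate at a hypothetical path rather than the kubelet's or webhook's actual code:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity parses a PEM certificate and reports whether it is valid at
// the given instant, mirroring the "expired or is not yet valid" condition
// surfaced by the TLS handshake in the log entries above.
func checkValidity(pemPath string, now time.Time) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block found in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Hypothetical path; substitute the serving certificate actually used by
	// the failing webhook endpoint (https://127.0.0.1:9743).
	if err := checkValidity("/tmp/webhook-serving.crt", time.Now()); err != nil {
		fmt.Println("certificate check failed:", err)
	}
}
```

The nearby certificate_manager.go lines are a separate concern: they report the kubelet's own serving certificate (valid until 2026-02-24) and its recomputed rotation deadline, not the expired webhook certificate.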
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.183501 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.183554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.183568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.183588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.183602 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.286228 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.286286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.286299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.286319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.286708 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.389690 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.389726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.389735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.389751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.389763 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.394175 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:26 crc kubenswrapper[4872]: E0127 06:54:26.394306 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:26 crc kubenswrapper[4872]: E0127 06:54:26.394361 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:34.394341841 +0000 UTC m=+50.921817037 (durationBeforeRetry 8s). 
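Annotation: the MountVolume failure recorded around this point is not retried immediately; because the referenced secret ("openshift-multus"/"metrics-daemon-secret") is not registered, the kubelet schedules the next attempt for 06:54:34, 8s after the failure ("durationBeforeRetry 8s"). A generic exponential-backoff sketch in Go, assuming a doubling delay with a cap; the kubelet's actual backoff parameters may differ:

```go
package main

import (
	"fmt"
	"time"
)

// nextRetry doubles the previous delay (starting from initial, capped at
// maxDelay) and returns the earliest time a new attempt is permitted,
// analogous to the "No retries permitted until ... (durationBeforeRetry ...)"
// bookkeeping in the surrounding entries.
func nextRetry(lastDelay, initial, maxDelay time.Duration, now time.Time) (time.Duration, time.Time) {
	delay := initial
	if lastDelay > 0 {
		delay = lastDelay * 2
	}
	if delay > maxDelay {
		delay = maxDelay
	}
	return delay, now.Add(delay)
}

func main() {
	// Assumed parameters, for illustration only.
	now := time.Date(2026, 1, 27, 6, 54, 26, 0, time.UTC)
	var delay time.Duration
	for attempt := 1; attempt <= 5; attempt++ {
		var next time.Time
		delay, next = nextRetry(delay, 500*time.Millisecond, 2*time.Minute, now)
		fmt.Printf("attempt %d: wait %v, retry no earlier than %s\n",
			attempt, delay, next.Format(time.RFC3339))
		now = next
	}
}
```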
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.492159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.492203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.492215 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.492235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.492249 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.595496 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.595541 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.595551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.595568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.595580 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.698138 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.698188 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.698205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.698236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.698251 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.801134 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.801178 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.801191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.801215 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.801230 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.904405 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.904451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.904487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.904509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:26 crc kubenswrapper[4872]: I0127 06:54:26.904519 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:26Z","lastTransitionTime":"2026-01-27T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.006820 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.006877 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.006889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.006905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.006915 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.076907 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:40:48.397080161 +0000 UTC Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.097565 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:27 crc kubenswrapper[4872]: E0127 06:54:27.097770 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.109438 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.109492 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.109506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.109524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.109536 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.211897 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.212179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.212261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.212334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.212460 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.315250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.315330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.315359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.315391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.315410 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.418530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.418977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.419063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.419136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.419198 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.522179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.522222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.522233 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.522256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.522276 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.624448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.624495 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.624613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.624636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.624646 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.727796 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.728572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.728671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.728768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.728869 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.831817 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.831885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.831896 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.831954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.831967 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.934392 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.934475 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.934498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.934528 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:27 crc kubenswrapper[4872]: I0127 06:54:27.934551 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:27Z","lastTransitionTime":"2026-01-27T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.037507 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.037555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.037566 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.037586 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.037599 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.077735 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:08:10.770167596 +0000 UTC Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.097495 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.097541 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.097499 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:28 crc kubenswrapper[4872]: E0127 06:54:28.097657 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:28 crc kubenswrapper[4872]: E0127 06:54:28.097816 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:28 crc kubenswrapper[4872]: E0127 06:54:28.098019 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.140553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.140602 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.140616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.140636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.140652 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.243425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.243501 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.243512 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.243529 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.243540 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.346473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.346518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.346530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.346550 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.346564 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.449040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.449107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.449120 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.449144 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.449160 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.555638 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.555681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.555692 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.555710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.555722 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.658637 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.658683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.658694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.658709 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.658719 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.762174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.762227 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.762245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.762271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.762288 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.865241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.865291 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.865304 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.865323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.865337 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.968711 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.968785 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.968799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.968816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:28 crc kubenswrapper[4872]: I0127 06:54:28.968827 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:28Z","lastTransitionTime":"2026-01-27T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.071508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.071547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.071557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.071577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.071587 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.078824 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:49:32.065524593 +0000 UTC Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.097213 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:29 crc kubenswrapper[4872]: E0127 06:54:29.097395 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.174403 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.174453 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.174462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.174490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.174500 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.277391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.277443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.277455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.277473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.277487 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.380151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.380698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.380726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.380750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.380769 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.484952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.485004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.485020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.485043 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.485055 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.588286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.588329 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.588338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.588354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.588362 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.691338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.691386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.691402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.691420 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.691432 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.794313 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.794360 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.794370 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.794390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.794400 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.897437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.897488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.897500 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.897519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:29 crc kubenswrapper[4872]: I0127 06:54:29.897976 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:29Z","lastTransitionTime":"2026-01-27T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.000657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.000740 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.000752 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.000769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.000780 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.079526 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:55:32.657897871 +0000 UTC Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.094270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.094313 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.094322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.094339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.094349 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.097402 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.097708 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.097482 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.097918 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.097430 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.098079 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.110100 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.115456 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.115486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.115495 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.115562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.115576 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.129978 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.135513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.135580 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.135592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.135634 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.135647 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.150343 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.154982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.155167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.155251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.155320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.155396 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.169036 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.173697 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.173727 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.173736 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.173780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.173797 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.195169 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: E0127 06:54:30.195294 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.197661 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.197771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.197892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.198012 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.198108 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.300573 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.300622 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.300638 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.300658 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.300669 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.403749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.403793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.403805 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.403824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.403863 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.506132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.506416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.506506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.506598 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.506684 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.610055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.610110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.610120 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.610139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.610152 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.713244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.713655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.713794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.713966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.714104 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.818123 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.818689 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.818929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.819124 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.819372 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.893941 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.894816 4872 scope.go:117] "RemoveContainer" containerID="67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.912302 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.921681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.921720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.921730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.921746 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.921758 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:30Z","lastTransitionTime":"2026-01-27T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.930587 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.948265 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.972664 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3
165b1e53ff548513ee882224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.984757 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:30 crc kubenswrapper[4872]: I0127 06:54:30.998237 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.010457 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.024412 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.024454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.024467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.024487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.024501 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.030297 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.045675 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.060723 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.075158 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.080122 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:37:59.902823601 +0000 UTC Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.089960 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.097311 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:31 crc kubenswrapper[4872]: E0127 06:54:31.097479 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.102647 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.114989 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.126535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.126568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.126579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.126599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.126610 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.126720 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.138508 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.229290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.229332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.229344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.229366 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.229382 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.331913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.331956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.331965 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.331985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.331999 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.434537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.434589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.434599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.434617 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.434630 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.466688 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/1.log" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.469290 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.469951 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.485018 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.500983 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.519753 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.538644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.538694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.538708 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.538728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.538742 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.548701 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.559188 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.571863 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 
06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.586353 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.602411 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.619744 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.634590 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.641011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.641061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.641075 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.641097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.641112 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.649107 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.665815 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.680337 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.697757 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.710713 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.722942 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.744112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.744146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.744154 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.744170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.744182 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.847244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.847501 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.847513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.847531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.847546 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.950508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.950548 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.950558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.950608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:31 crc kubenswrapper[4872]: I0127 06:54:31.950622 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:31Z","lastTransitionTime":"2026-01-27T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.053663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.053741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.053754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.053775 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.053789 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.080270 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:17:32.934153481 +0000 UTC Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.097732 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:32 crc kubenswrapper[4872]: E0127 06:54:32.097976 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.098209 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:32 crc kubenswrapper[4872]: E0127 06:54:32.098392 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.098713 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:32 crc kubenswrapper[4872]: E0127 06:54:32.098858 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.157106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.157144 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.157161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.157181 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.157193 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.259566 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.259609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.259619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.259641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.259652 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.361776 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.361828 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.361855 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.361874 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.361888 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.461147 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.464892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.464936 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.464949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.464968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.464979 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.470499 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.474232 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/2.log" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.474896 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/1.log" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.477618 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798" exitCode=1 Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.477681 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.477727 4872 scope.go:117] "RemoveContainer" containerID="67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.478604 4872 scope.go:117] "RemoveContainer" containerID="de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798" Jan 27 06:54:32 crc kubenswrapper[4872]: E0127 06:54:32.478763 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.480181 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.500097 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.509764 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.521430 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.532498 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.545102 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.564051 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91
b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.567758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.567806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.567816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.567835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.567856 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.576718 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.594083 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 
06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.604406 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.622096 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.636616 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.651278 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.665011 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.670473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.670510 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.670520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.670538 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.670552 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.677975 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.693530 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.707926 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.723116 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.737285 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.750102 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.766593 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.772835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.772895 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.772907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.772925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.772937 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.780125 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.795376 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.819074 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91
b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67ec3fe62efe8b2fafa3d1742cba881035c491d3165b1e53ff548513ee882224\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:16Z\\\",\\\"message\\\":\\\"event handler 2 for removal\\\\nI0127 06:54:16.365988 6195 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 06:54:16.366004 6195 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:54:16.366020 6195 factory.go:656] Stopping watch factory\\\\nI0127 06:54:16.366032 6195 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:54:16.366041 6195 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:54:16.366047 6195 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 06:54:16.366053 6195 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:54:16.366059 6195 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 06:54:16.366066 6195 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:54:16.366072 6195 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:54:16.366264 6195 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:54:16.366460 6195 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 06:54:16.366653 6195 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 
06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.831833 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.847670 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 
06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.861573 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.875710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.875752 4872 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.875764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.875780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.875791 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.878500 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.893160 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.907631 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.920217 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.932068 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.948576 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.978191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.978259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.978277 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.978301 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:32 crc kubenswrapper[4872]: I0127 06:54:32.978316 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:32Z","lastTransitionTime":"2026-01-27T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.080598 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:01:03.436821575 +0000 UTC Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.081262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.081310 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.081320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.081343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.081355 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.097720 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:33 crc kubenswrapper[4872]: E0127 06:54:33.097932 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.184942 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.184990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.185004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.185022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.185035 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.287308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.287362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.287373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.287399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.287413 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.390044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.390117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.390131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.390151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.390165 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.483114 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/2.log" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.486277 4872 scope.go:117] "RemoveContainer" containerID="de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798" Jan 27 06:54:33 crc kubenswrapper[4872]: E0127 06:54:33.486445 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.492224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.492261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.492270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.492286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.492296 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.500736 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.512433 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc 
kubenswrapper[4872]: I0127 06:54:33.528072 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.544702 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.561328 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.574401 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.595100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.595140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.595149 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.595166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.595175 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.598200 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.610890 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.622328 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.634293 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.647647 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.660170 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.673002 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.687251 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.698104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.698177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.698193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.698216 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.698231 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.700063 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.715021 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.737819 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91
b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.801202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.801273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.801288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.801711 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.801766 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.904762 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.904812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.904821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.904838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:33 crc kubenswrapper[4872]: I0127 06:54:33.904863 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:33Z","lastTransitionTime":"2026-01-27T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.007971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.008017 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.008026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.008044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.008054 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.081735 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:09:37.208234275 +0000 UTC Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.097411 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.097503 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.097450 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:34 crc kubenswrapper[4872]: E0127 06:54:34.097823 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:34 crc kubenswrapper[4872]: E0127 06:54:34.097943 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:34 crc kubenswrapper[4872]: E0127 06:54:34.098115 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.110411 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.110455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.110467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.110485 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.110500 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.111891 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.124594 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc 
kubenswrapper[4872]: I0127 06:54:34.140211 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.155020 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.167407 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.179928 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.192053 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.209047 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.214311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.214372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.214386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.214406 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.214422 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.222194 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.235876 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.247764 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.259373 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.272648 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.284177 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.298979 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.317578 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.317621 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.317654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.317672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.317683 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.319267 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.332632 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.420020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.420072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.420083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.420105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.420117 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.494270 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:34 crc kubenswrapper[4872]: E0127 06:54:34.494499 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:34 crc kubenswrapper[4872]: E0127 06:54:34.494574 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:54:50.494550035 +0000 UTC m=+67.022025231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.523811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.523868 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.523879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.523896 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.523906 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.626576 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.626874 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.627033 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.627128 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.627211 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.730438 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.730497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.730507 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.730525 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.730536 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.832555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.832864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.832933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.832994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.833072 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.935826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.935895 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.935914 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.935936 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:34 crc kubenswrapper[4872]: I0127 06:54:34.935950 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:34Z","lastTransitionTime":"2026-01-27T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.038422 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.038459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.038468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.038486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.038497 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.082435 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:43:40.753870041 +0000 UTC Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.097906 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:35 crc kubenswrapper[4872]: E0127 06:54:35.098373 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.141066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.141356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.141516 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.141585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.141658 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.244165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.244208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.244219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.244236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.244248 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.346766 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.347050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.347113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.347189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.347310 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.450589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.450638 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.450653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.450674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.450688 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.554152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.554235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.554246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.554266 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.554281 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.656763 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.656808 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.656821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.656881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.656908 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.760057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.760164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.760176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.760199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.760212 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.863813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.864104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.864122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.864147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.864165 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.912110 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:35 crc kubenswrapper[4872]: E0127 06:54:35.912268 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:35 crc kubenswrapper[4872]: E0127 06:54:35.912356 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:55:07.91233622 +0000 UTC m=+84.439811416 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.967157 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.967193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.967200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.967217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:35 crc kubenswrapper[4872]: I0127 06:54:35.967227 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:35Z","lastTransitionTime":"2026-01-27T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.013207 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.013369 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.013409 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.013450 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013558 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013649 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013649 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:55:08.013610042 +0000 UTC m=+84.541085238 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013666 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013710 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013715 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:55:08.013696014 +0000 UTC m=+84.541171210 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013724 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013826 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:55:08.013800888 +0000 UTC m=+84.541276084 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013661 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013875 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.013906 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:55:08.01390026 +0000 UTC m=+84.541375456 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.070782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.070887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.070902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.070925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.070938 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.083203 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:54:15.018214821 +0000 UTC Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.097684 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.097732 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.097960 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.098131 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.098549 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:36 crc kubenswrapper[4872]: E0127 06:54:36.098637 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.173427 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.173468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.173478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.173497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.173510 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.276232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.276701 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.276859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.276984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.277084 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.381969 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.382007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.382018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.382035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.382049 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.484758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.484801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.484812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.484831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.484860 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.588137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.588170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.588179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.588194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.588204 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.690982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.691036 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.691047 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.691066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.691080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.793466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.793519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.793532 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.793550 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.793566 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.896929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.896979 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.896994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.897016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:36 crc kubenswrapper[4872]: I0127 06:54:36.897030 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:36Z","lastTransitionTime":"2026-01-27T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.000461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.000506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.000519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.000537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.000549 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.083660 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:57:02.590774111 +0000 UTC Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.098114 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:37 crc kubenswrapper[4872]: E0127 06:54:37.098318 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.103568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.103606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.103615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.103631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.103643 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.206318 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.206365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.206375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.206395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.206406 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.309424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.309480 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.309492 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.309527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.309538 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.412949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.413008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.413022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.413039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.413055 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.514767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.514817 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.514828 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.514867 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.514882 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.618736 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.618799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.618812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.618890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.618906 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.721748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.721783 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.721795 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.721812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.721823 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.824760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.824815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.824832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.824870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.824886 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.927375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.927418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.927426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.927443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:37 crc kubenswrapper[4872]: I0127 06:54:37.927453 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:37Z","lastTransitionTime":"2026-01-27T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.030044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.030096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.030108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.030126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.030139 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.084620 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:50:28.034015787 +0000 UTC Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.098138 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.098169 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.098150 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:38 crc kubenswrapper[4872]: E0127 06:54:38.098289 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:38 crc kubenswrapper[4872]: E0127 06:54:38.098361 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:38 crc kubenswrapper[4872]: E0127 06:54:38.098426 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.132160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.132221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.132238 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.132259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.132276 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.235897 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.235937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.235946 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.235963 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.235975 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.338111 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.338146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.338156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.338173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.338183 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.440885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.440927 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.440936 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.440952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.440964 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.550190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.550276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.550306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.550329 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.550343 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.653034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.653094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.653104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.653126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.653139 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.756012 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.756054 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.756063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.756079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.756089 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.858803 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.858868 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.858880 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.859008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.859032 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.961624 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.961673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.961690 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.961709 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:38 crc kubenswrapper[4872]: I0127 06:54:38.961721 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:38Z","lastTransitionTime":"2026-01-27T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.065508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.065552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.065564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.065583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.065595 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.085051 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:37:38.144137857 +0000 UTC Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.097460 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:39 crc kubenswrapper[4872]: E0127 06:54:39.097893 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.168132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.168194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.168211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.168232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.168242 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.271965 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.272000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.272010 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.272027 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.272037 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.375001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.375037 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.375049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.375066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.375074 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.477084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.477113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.477122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.477140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.477153 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.581258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.581317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.581326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.581950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.582243 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.686631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.686699 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.686710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.686729 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.686741 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.789509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.789551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.789567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.789586 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.789597 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.891978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.892141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.892209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.892298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.892387 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.995641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.995938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.996023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.996110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:39 crc kubenswrapper[4872]: I0127 06:54:39.996173 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:39Z","lastTransitionTime":"2026-01-27T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.086151 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:19:04.554838484 +0000 UTC Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.097516 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.097557 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.097671 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.097726 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.097821 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.097901 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.099032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.099059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.099071 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.099086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.099100 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.201083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.201122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.201133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.201156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.201168 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.303626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.303663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.303671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.303687 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.303697 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.407264 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.407320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.407331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.407350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.407362 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.411379 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.411409 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.411428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.411442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.411453 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.435097 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.447102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.447141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.447151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.447165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.447176 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.459876 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.467901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.467977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.467991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.468015 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.468031 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.495963 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.530062 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.530117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.530130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.530149 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.530171 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.546512 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.550667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.550723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.550737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.550756 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.550768 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.564285 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:40 crc kubenswrapper[4872]: E0127 06:54:40.564403 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.568529 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.568784 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.568897 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.569004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.569103 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.671506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.671567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.671580 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.671596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.671607 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.774670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.774725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.774735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.774751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.774762 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.876703 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.876986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.877064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.877171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.877268 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.979274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.979524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.979590 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.979655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:40 crc kubenswrapper[4872]: I0127 06:54:40.979715 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:40Z","lastTransitionTime":"2026-01-27T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.081635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.081707 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.081721 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.081737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.081751 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.087025 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:15:43.23158057 +0000 UTC Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.097364 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:41 crc kubenswrapper[4872]: E0127 06:54:41.097674 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.184326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.184601 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.184668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.184745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.184821 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.287692 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.287730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.287739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.287753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.287761 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.396390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.396490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.396516 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.396552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.396580 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.499647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.499688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.499700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.499720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.499733 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.602051 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.602093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.602104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.602120 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.602132 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.704680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.704728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.704737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.704752 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.704763 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.807618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.807700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.807715 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.807733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.807746 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.910645 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.910679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.910688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.910700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:41 crc kubenswrapper[4872]: I0127 06:54:41.910709 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:41Z","lastTransitionTime":"2026-01-27T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.013089 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.013145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.013161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.013182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.013198 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.087991 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:32:12.714175745 +0000 UTC Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.098040 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.098078 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:42 crc kubenswrapper[4872]: E0127 06:54:42.098160 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.098040 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:42 crc kubenswrapper[4872]: E0127 06:54:42.098238 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:42 crc kubenswrapper[4872]: E0127 06:54:42.098305 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.115455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.115508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.115523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.115543 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.115557 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.217993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.218034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.218043 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.218057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.218069 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.320015 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.320044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.320053 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.320065 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.320074 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.421936 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.421973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.421996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.422010 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.422020 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.525151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.525203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.525215 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.525232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.525244 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.627630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.627682 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.627702 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.627719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.627730 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.730465 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.730719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.730794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.730914 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.731020 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.833287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.833364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.833378 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.833410 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.833428 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.935689 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.935998 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.936107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.936215 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:42 crc kubenswrapper[4872]: I0127 06:54:42.936315 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:42Z","lastTransitionTime":"2026-01-27T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.038981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.039022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.039032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.039046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.039056 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.088292 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:57:24.930084796 +0000 UTC Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.097023 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:43 crc kubenswrapper[4872]: E0127 06:54:43.097217 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.141299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.141364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.141376 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.141390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.141399 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.243856 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.243895 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.243908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.243924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.243937 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.346672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.347112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.347341 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.347454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.347543 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.450005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.450035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.450044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.450056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.450066 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.552430 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.552682 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.552749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.552816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.552899 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.654870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.654910 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.654919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.654935 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.654946 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.759182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.759219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.759231 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.759246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.759256 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.861248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.861292 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.861302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.861319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.861330 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.964932 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.964970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.964980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.964996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:43 crc kubenswrapper[4872]: I0127 06:54:43.965005 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:43Z","lastTransitionTime":"2026-01-27T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.067479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.067939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.067951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.067970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.067981 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.088897 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:55:47.774853217 +0000 UTC Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.097546 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.097630 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.097734 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:44 crc kubenswrapper[4872]: E0127 06:54:44.097730 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:44 crc kubenswrapper[4872]: E0127 06:54:44.097812 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:44 crc kubenswrapper[4872]: E0127 06:54:44.097911 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.109532 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.122049 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.135369 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.147147 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.170418 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.171521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.171570 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.171583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.171602 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.171613 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.182601 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.197205 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.214185 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.224317 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.235989 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.246826 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.256361 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.270366 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.277787 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.277884 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.277899 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.277913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.277922 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.284287 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.303163 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91
b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.312872 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.323861 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.380908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.380969 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.380981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.380997 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.381006 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.483956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.483994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.484003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.484021 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.484031 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.587248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.587299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.587312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.587330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.587340 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.689745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.689779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.689790 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.689804 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.689816 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.791567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.791605 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.791615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.791660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.791672 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.895338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.895378 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.895388 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.895403 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.895412 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.998557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.998603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.998614 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.998631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:44 crc kubenswrapper[4872]: I0127 06:54:44.998642 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:44Z","lastTransitionTime":"2026-01-27T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.089885 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:09:25.600606203 +0000 UTC Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.097329 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:45 crc kubenswrapper[4872]: E0127 06:54:45.097596 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.100898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.101030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.101048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.101063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.101073 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.203665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.203712 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.203735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.203752 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.203762 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.306286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.306330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.306340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.306355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.306366 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.410066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.410110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.410122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.410140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.410151 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.512225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.512273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.512284 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.512300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.512311 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.613721 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.613763 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.613774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.613789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.613800 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.716522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.716579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.716603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.716629 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.716650 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.820418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.820469 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.820486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.820519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.820553 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.922543 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.922584 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.922593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.922606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:45 crc kubenswrapper[4872]: I0127 06:54:45.922615 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:45Z","lastTransitionTime":"2026-01-27T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.025674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.025719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.025730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.025750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.025760 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.090019 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:40:41.802751975 +0000 UTC Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.097574 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.097619 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:46 crc kubenswrapper[4872]: E0127 06:54:46.097768 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.097574 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:46 crc kubenswrapper[4872]: E0127 06:54:46.098063 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:46 crc kubenswrapper[4872]: E0127 06:54:46.098284 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.099141 4872 scope.go:117] "RemoveContainer" containerID="de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798" Jan 27 06:54:46 crc kubenswrapper[4872]: E0127 06:54:46.099361 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.130491 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.130550 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.130584 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.130606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.130621 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.232368 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.232408 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.232417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.232431 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.232441 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.334262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.334309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.334322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.334338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.334350 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.437000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.437035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.437048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.437064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.437077 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.538929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.538982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.538991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.539006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.539015 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.642164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.642194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.642207 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.642222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.642231 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.745258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.745334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.745352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.745384 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.745402 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.848636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.848684 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.848695 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.848708 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.848717 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.951660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.951704 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.951717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.951733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:46 crc kubenswrapper[4872]: I0127 06:54:46.951744 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:46Z","lastTransitionTime":"2026-01-27T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.053394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.053435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.053447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.053459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.053468 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.090760 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:56:26.389567938 +0000 UTC Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.098072 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:47 crc kubenswrapper[4872]: E0127 06:54:47.098190 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.155282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.155314 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.155322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.155336 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.155344 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.258685 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.258732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.258743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.258760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.258774 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.360747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.360779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.360787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.360801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.360809 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.463859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.463898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.463910 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.463925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.463936 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.566395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.566437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.566446 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.566464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.566497 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.668829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.668875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.668883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.668897 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.668907 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.771013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.771077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.771094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.771118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.771137 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.873866 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.873913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.873922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.873937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.873946 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.976238 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.976273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.976284 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.976300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:47 crc kubenswrapper[4872]: I0127 06:54:47.976310 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:47Z","lastTransitionTime":"2026-01-27T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.079076 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.079109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.079118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.079130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.079139 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.091016 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:37:03.800298896 +0000 UTC Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.097308 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:48 crc kubenswrapper[4872]: E0127 06:54:48.097436 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.097450 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:48 crc kubenswrapper[4872]: E0127 06:54:48.097529 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.097308 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:48 crc kubenswrapper[4872]: E0127 06:54:48.097596 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.181404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.181439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.181451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.181467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.181479 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.285605 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.285636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.285647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.285659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.285667 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.388881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.388918 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.388929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.388946 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.388957 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.491066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.491103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.491114 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.491130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.491144 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.593504 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.593540 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.593551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.593562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.593571 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.696199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.696246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.696271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.696289 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.696299 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.798692 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.798734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.798742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.798756 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.798764 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.900898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.900935 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.900945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.900958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:48 crc kubenswrapper[4872]: I0127 06:54:48.900967 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:48Z","lastTransitionTime":"2026-01-27T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.004298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.004391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.004410 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.004435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.004452 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.091766 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:50:57.253366661 +0000 UTC Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.098220 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:49 crc kubenswrapper[4872]: E0127 06:54:49.098416 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.108171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.108349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.108371 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.108393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.108411 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.210316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.210397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.210414 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.210763 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.210804 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.313498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.313547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.313560 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.313580 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.313592 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.416373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.416406 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.416418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.416432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.416441 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.519930 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.519985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.519997 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.520017 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.520030 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.622943 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.622986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.623001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.623019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.623032 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.725241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.725276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.725286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.725300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.725312 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.827820 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.827880 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.827889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.827904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.827915 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.929630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.929669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.929681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.929697 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:49 crc kubenswrapper[4872]: I0127 06:54:49.929708 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:49Z","lastTransitionTime":"2026-01-27T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.032626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.032657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.032666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.032679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.032687 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.092598 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:40:42.704408952 +0000 UTC Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.098640 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.099257 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.099890 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.099939 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.100172 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.100220 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.135959 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.136009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.136059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.136085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.136098 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.239030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.239082 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.239099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.239124 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.239140 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.340997 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.341024 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.341032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.341044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.341064 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.443470 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.443497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.443505 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.443518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.443527 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.547079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.547131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.547156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.547169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.547177 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.577911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.578071 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.578143 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:55:22.57812577 +0000 UTC m=+99.105600966 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.649214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.649253 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.649264 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.649276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.649284 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.751404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.751443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.751473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.751493 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.751504 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.827821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.827908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.827917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.827931 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.827941 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.840583 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:50Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.843733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.843775 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.843786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.843801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.843809 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.856519 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:50Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.859701 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.859719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.859726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.859738 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.859746 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.870862 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:50Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.873746 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.873768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.873776 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.873787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.873795 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.883691 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:50Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.886256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.886282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.886290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.886302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.886311 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.897676 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:50Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:50 crc kubenswrapper[4872]: E0127 06:54:50.897783 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.898861 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.898879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.898887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.898898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:50 crc kubenswrapper[4872]: I0127 06:54:50.898905 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:50Z","lastTransitionTime":"2026-01-27T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.001271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.001306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.001317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.001331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.001343 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.093293 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:55:25.481059848 +0000 UTC Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.097629 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:51 crc kubenswrapper[4872]: E0127 06:54:51.097762 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.103453 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.103668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.103792 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.103922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.104007 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.206527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.206557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.206566 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.206579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.206589 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.309221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.309261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.309279 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.309299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.309312 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.412870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.413911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.414002 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.414090 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.414149 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.516100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.516156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.516186 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.516218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.516258 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.618428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.618509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.618518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.618530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.618539 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.720956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.720988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.720997 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.721010 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.721020 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.823225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.823271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.823282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.823296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.823307 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.925757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.926027 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.926093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.926163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:51 crc kubenswrapper[4872]: I0127 06:54:51.926229 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:51Z","lastTransitionTime":"2026-01-27T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.028686 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.028723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.028733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.028746 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.028754 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.094450 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:10:33.594502347 +0000 UTC Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.097699 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.097789 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:52 crc kubenswrapper[4872]: E0127 06:54:52.097905 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.097939 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:52 crc kubenswrapper[4872]: E0127 06:54:52.098032 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:52 crc kubenswrapper[4872]: E0127 06:54:52.098118 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.130432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.130467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.130476 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.130492 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.130503 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.232251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.232293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.232305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.232323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.232335 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.334125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.334165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.334174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.334188 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.334197 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.436600 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.436647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.436659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.436677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.436689 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.538622 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.538663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.538671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.538688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.538698 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.640611 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.640659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.640690 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.640709 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.640720 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.742716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.742902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.742921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.742934 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.742943 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.845268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.845295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.845303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.845315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.845323 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.947888 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.947939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.947953 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.947968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:52 crc kubenswrapper[4872]: I0127 06:54:52.947979 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:52Z","lastTransitionTime":"2026-01-27T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.050296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.050324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.050332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.050345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.050354 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.095898 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:17:33.115663674 +0000 UTC Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.097216 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:53 crc kubenswrapper[4872]: E0127 06:54:53.097336 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.152635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.152660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.152668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.152681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.152691 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.255337 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.255376 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.255385 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.255398 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.255408 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.357518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.357553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.357562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.357575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.357584 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.460175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.460211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.460222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.460259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.460269 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.562527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.562563 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.562572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.562585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.562595 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.577326 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/0.log" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.577398 4872 generic.go:334] "Generic (PLEG): container finished" podID="8575a338-fc73-4413-ab05-0fdfdd6bdf2d" containerID="573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28" exitCode=1 Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.578283 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerDied","Data":"573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.578820 4872 scope.go:117] "RemoveContainer" containerID="573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.590070 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.602616 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.613374 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.622461 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.634334 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.645401 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.658163 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.667680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.667701 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.667710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.667722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.667731 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.676000 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91
b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.686123 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.696461 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.704916 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.716285 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.729278 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.739557 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.749692 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.759372 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.769487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.769509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.769518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.769529 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.769553 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.772629 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.871478 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.871520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.871536 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.871554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.871572 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.974666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.974694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.974705 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.974720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:53 crc kubenswrapper[4872]: I0127 06:54:53.974732 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:53Z","lastTransitionTime":"2026-01-27T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.077343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.077373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.077383 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.077399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.077409 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.096862 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:47:35.402424241 +0000 UTC Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.097080 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.097132 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.097185 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:54 crc kubenswrapper[4872]: E0127 06:54:54.097502 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:54 crc kubenswrapper[4872]: E0127 06:54:54.097585 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:54 crc kubenswrapper[4872]: E0127 06:54:54.097643 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.110911 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.113046 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.121750 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.132883 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.143968 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.157969 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.171477 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.179780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.179802 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.179810 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.179824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.179832 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.186985 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.205707 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.220377 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.232375 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 
2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.242483 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.251005 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.260232 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.272396 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.281862 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.281894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.281907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.281921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.281930 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.290734 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.299007 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.311817 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.384640 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.384695 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.384707 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.384724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.385132 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.486571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.486607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.486617 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.486633 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.486644 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.582115 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/0.log" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.582222 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerStarted","Data":"07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.588453 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.588482 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.588490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.588503 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.588513 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.593502 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.604693 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.613887 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.627067 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.639144 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.650783 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.663209 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.674458 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.686338 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.690029 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.690054 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.690064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.690077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.690084 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.698422 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.709966 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.722975 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.733405 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.744074 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.767786 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.785856 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.791961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.791997 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.792006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.792022 4872 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.792032 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.804189 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91
b27797c3216e529099d22798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.813964 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.894378 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.894417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.894426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.894441 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.894451 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.996783 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.996811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.996822 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.996834 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:54 crc kubenswrapper[4872]: I0127 06:54:54.996867 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:54Z","lastTransitionTime":"2026-01-27T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.097312 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:33:08.722953588 +0000 UTC Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.097488 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:55 crc kubenswrapper[4872]: E0127 06:54:55.097588 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.098764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.098790 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.098798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.098808 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.098818 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.201011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.201050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.201059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.201076 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.201088 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.303692 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.303755 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.303770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.303794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.303810 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.406105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.406148 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.406160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.406176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.406188 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.508305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.508362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.508373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.508389 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.508400 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.610206 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.610241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.610251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.610267 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.610277 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.712889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.712921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.712931 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.712947 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.712957 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.814962 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.814999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.815008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.815024 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.815034 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.916967 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.917007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.917019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.917037 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:55 crc kubenswrapper[4872]: I0127 06:54:55.917050 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:55Z","lastTransitionTime":"2026-01-27T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.019835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.019916 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.019929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.019945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.019955 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.097524 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.097594 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.097532 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:56 crc kubenswrapper[4872]: E0127 06:54:56.097668 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:56 crc kubenswrapper[4872]: E0127 06:54:56.097763 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:56 crc kubenswrapper[4872]: E0127 06:54:56.097876 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.097515 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:04:31.438246069 +0000 UTC Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.122693 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.122956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.123030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.123102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.123165 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.225033 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.225064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.225072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.225086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.225095 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.327692 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.327724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.327732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.327745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.327755 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.429341 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.429674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.429746 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.429826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.429920 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.532259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.532320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.532333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.532354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.532365 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.634762 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.634803 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.634813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.634827 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.634853 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.737113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.737152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.737165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.737178 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.737189 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.839377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.839440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.839451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.839465 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.839474 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.942207 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.942252 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.942266 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.942283 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:56 crc kubenswrapper[4872]: I0127 06:54:56.942296 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:56Z","lastTransitionTime":"2026-01-27T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.044559 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.044633 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.044643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.044655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.044664 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.098049 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:57 crc kubenswrapper[4872]: E0127 06:54:57.098668 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.098045 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:11:51.477653357 +0000 UTC Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.147131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.147163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.147171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.147184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.147193 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.250047 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.250087 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.250108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.250124 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.250133 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.352145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.352187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.352198 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.352215 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.352226 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.454404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.454447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.454457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.454475 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.454486 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.556139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.556173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.556184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.556199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.556211 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.658570 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.658605 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.658616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.658630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.658638 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.761170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.761436 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.761510 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.761584 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.761654 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.863720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.863760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.863771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.863787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.863799 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.966185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.966222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.966259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.966282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:57 crc kubenswrapper[4872]: I0127 06:54:57.966290 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:57Z","lastTransitionTime":"2026-01-27T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.068140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.068187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.068200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.068217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.068227 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.097981 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:54:58 crc kubenswrapper[4872]: E0127 06:54:58.098301 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.098068 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:54:58 crc kubenswrapper[4872]: E0127 06:54:58.099007 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.098887 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:48:16.661822075 +0000 UTC Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.098034 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:54:58 crc kubenswrapper[4872]: E0127 06:54:58.099351 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.170297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.170331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.170343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.170360 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.170372 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.272479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.272518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.272530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.272547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.272559 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.374613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.374678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.374688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.374701 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.374711 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.476615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.476647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.476656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.476672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.476682 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.578732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.579118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.579221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.579329 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.579412 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.681945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.682200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.682265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.682328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.682384 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.784400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.784434 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.784442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.784458 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.784466 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.887619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.887698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.887718 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.887747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.887767 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.990452 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.990723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.990977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.991194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:58 crc kubenswrapper[4872]: I0127 06:54:58.991409 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:58Z","lastTransitionTime":"2026-01-27T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.094085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.094364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.094478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.094569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.094655 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.097414 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:54:59 crc kubenswrapper[4872]: E0127 06:54:59.097616 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.099511 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:16:48.348359668 +0000 UTC Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.196800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.196882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.196899 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.196915 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.196930 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.299078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.299340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.299444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.299569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.299759 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.401863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.402107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.402185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.402273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.402378 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.505562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.505918 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.506033 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.506139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.506217 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.608126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.608160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.608170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.608183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.608192 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.710606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.710885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.710966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.711041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.711114 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.814097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.814373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.814499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.814587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.814671 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.917895 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.917961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.917985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.918008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:54:59 crc kubenswrapper[4872]: I0127 06:54:59.918025 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:54:59Z","lastTransitionTime":"2026-01-27T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.020413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.020442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.020456 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.020476 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.020486 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.097870 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.097936 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.098325 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:00 crc kubenswrapper[4872]: E0127 06:55:00.098424 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:00 crc kubenswrapper[4872]: E0127 06:55:00.098560 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.098591 4872 scope.go:117] "RemoveContainer" containerID="de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798" Jan 27 06:55:00 crc kubenswrapper[4872]: E0127 06:55:00.098617 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.099619 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:19:42.354357582 +0000 UTC Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.123600 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.123721 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.123733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.123746 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.123755 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.226160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.226539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.226618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.226698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.226766 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.328321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.328369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.328380 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.328393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.328402 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.430513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.430543 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.430553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.430567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.430576 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.532379 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.532442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.532454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.532474 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.532488 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.601835 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/2.log" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.605041 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.605550 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.623492 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.635169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.635213 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.635225 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.635244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.635256 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.641395 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.655752 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2
338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.681986 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.705218 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.726460 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.737014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.737041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.737049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.737061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.737070 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.741568 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.751972 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.766653 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.778531 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.796366 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 
06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.807592 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.819611 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.831496 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.839674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.839708 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.839717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.839731 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.839741 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.850178 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.866765 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.881118 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.894766 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.941878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.941914 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.941922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.941937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:00 crc kubenswrapper[4872]: I0127 06:55:00.941946 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:00Z","lastTransitionTime":"2026-01-27T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.045018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.045057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.045069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.045086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.045098 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.050714 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.050770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.050779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.050794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.050804 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.068100 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.071117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.071147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.071157 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.071172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.071182 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.082277 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.085397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.085423 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.085431 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.085443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.085452 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.095826 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.097477 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.097580 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.098733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.098753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.098760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.098771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.098779 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.100008 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:50:19.29830307 +0000 UTC Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.111212 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.114812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.114860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.114871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.114885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.114895 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.128440 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.128581 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.147835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.147980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.148007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.148043 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.148067 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.250972 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.251050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.251070 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.251100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.251119 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.353067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.353105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.353115 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.353131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.353142 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.455777 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.455815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.455827 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.455858 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.455872 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.557909 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.557961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.557972 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.557986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.557995 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.609720 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/3.log" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.610479 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/2.log" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.613767 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" exitCode=1 Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.613815 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.613892 4872 scope.go:117] "RemoveContainer" containerID="de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.618364 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 06:55:01 crc kubenswrapper[4872]: E0127 06:55:01.618587 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.627304 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 
06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.637733 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.646933 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.659811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.659920 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.659894 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.659937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.660073 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.660084 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.677461 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.688533 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.700250 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.712753 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.724518 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.733743 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.744502 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.754961 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.762092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.762117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.762125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.762137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.762146 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.764003 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.774512 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.791459 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.807838 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6a1445fdbfed6076f23561cc7fbdbdb1932f91b27797c3216e529099d22798\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:31Z\\\",\\\"message\\\":\\\"8] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 06:54:31.792296 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator\\\\\\\"}\\\\nI0127 06:54:31.795052 6402 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 4.135764ms\\\\nI0127 06:54:31.795087 6402 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 06:54:31.795086 6402 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0127 06:54:31.795136 6402 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.018065ms\\\\nF0127 06:54:31.795151 6402 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.026632 6776 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.027399 6776 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 06:55:01.034479 6776 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:55:01.034554 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 06:55:01.034580 6776 factory.go:656] Stopping watch 
factory\\\\nI0127 06:55:01.034599 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:55:01.034611 6776 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:55:01.047465 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 06:55:01.047488 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 06:55:01.047560 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:55:01.047587 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:55:01.047661 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mount
Path\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.817230 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.829501 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.864130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.864161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.864172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.864259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.864271 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.966684 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.966753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.966777 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.966806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:01 crc kubenswrapper[4872]: I0127 06:55:01.966829 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:01Z","lastTransitionTime":"2026-01-27T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.070399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.070462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.070477 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.070497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.070508 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.097142 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.097170 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:02 crc kubenswrapper[4872]: E0127 06:55:02.097284 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.097316 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:02 crc kubenswrapper[4872]: E0127 06:55:02.097397 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:02 crc kubenswrapper[4872]: E0127 06:55:02.097440 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.100121 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:13:06.175987591 +0000 UTC Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.173146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.173183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.173194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.173209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.173220 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.275704 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.275753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.275772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.275790 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.275802 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.378173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.378211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.378221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.378235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.378245 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.480440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.480508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.480522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.480537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.480547 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.583331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.583371 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.583381 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.583396 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.583408 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.618670 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/3.log" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.621351 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 06:55:02 crc kubenswrapper[4872]: E0127 06:55:02.621488 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.634480 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.648154 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.660500 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.671347 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.682765 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.685609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.685645 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.685659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.685676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.685688 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.696246 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.705432 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.716136 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.726228 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.734314 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.744284 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.753575 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.763965 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.780582 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.026632 6776 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.027399 6776 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 06:55:01.034479 6776 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:55:01.034554 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 06:55:01.034580 6776 factory.go:656] Stopping watch factory\\\\nI0127 06:55:01.034599 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:55:01.034611 6776 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:55:01.047465 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 06:55:01.047488 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 06:55:01.047560 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:55:01.047587 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:55:01.047661 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:55:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.788321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.788365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.788378 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.788393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.788406 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.790559 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.799090 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.808906 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 
06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.818604 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.890989 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.891032 4872 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.891042 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.891059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.891071 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.993493 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.993533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.993546 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.993564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:02 crc kubenswrapper[4872]: I0127 06:55:02.993577 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:02Z","lastTransitionTime":"2026-01-27T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.095800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.095837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.095863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.095875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.095886 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.097091 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:03 crc kubenswrapper[4872]: E0127 06:55:03.097229 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.101128 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:22:00.80821832 +0000 UTC Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.198773 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.198825 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.198837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.198874 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.198885 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.301478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.301527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.301539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.301554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.301564 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.403891 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.403925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.403934 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.403949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.403962 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.506889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.506949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.506967 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.506991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.507007 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.609162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.609200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.609209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.609221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.609230 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.710984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.711276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.711409 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.711513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.711636 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.814364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.814394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.814404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.814420 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.814429 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.916652 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.916922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.916939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.916956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:03 crc kubenswrapper[4872]: I0127 06:55:03.916969 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:03Z","lastTransitionTime":"2026-01-27T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.019088 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.019321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.019427 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.019522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.019602 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.098005 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.098027 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:04 crc kubenswrapper[4872]: E0127 06:55:04.099186 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:04 crc kubenswrapper[4872]: E0127 06:55:04.099013 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.098089 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:04 crc kubenswrapper[4872]: E0127 06:55:04.099497 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.101761 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:49:29.840012189 +0000 UTC Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.109293 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.121315 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.122984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.123192 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.123256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.123329 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.123475 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.134709 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.147161 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.159753 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.171491 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.182333 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.191783 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.201259 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.213457 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.224999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.225024 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.225032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.225046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.225056 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.230468 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61
a7984f8f9446b241b053e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.026632 6776 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.027399 6776 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 06:55:01.034479 6776 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:55:01.034554 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 06:55:01.034580 6776 factory.go:656] Stopping watch factory\\\\nI0127 06:55:01.034599 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:55:01.034611 6776 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:55:01.047465 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 06:55:01.047488 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 06:55:01.047560 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:55:01.047587 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:55:01.047661 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:55:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.240499 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.251479 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.260461 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.270099 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.278884 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.289293 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 
06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.300189 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:04Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.327050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.327086 4872 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.327095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.327109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.327118 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.429816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.429894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.429911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.429928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.429939 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.532041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.532084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.532095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.532112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.532124 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.634030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.634067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.634078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.634094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.634123 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.736780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.736814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.736823 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.736836 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.736866 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.861980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.862022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.862030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.862045 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.862054 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.964574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.964855 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.964948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.965029 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:04 crc kubenswrapper[4872]: I0127 06:55:04.965090 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:04Z","lastTransitionTime":"2026-01-27T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.067055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.067325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.067389 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.067470 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.067538 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.098263 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:05 crc kubenswrapper[4872]: E0127 06:55:05.098972 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.102178 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:15:33.702350854 +0000 UTC Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.170151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.170196 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.170210 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.170226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.170237 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.272720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.273100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.273172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.273311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.273401 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.376496 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.376539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.376550 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.376569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.376605 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.479316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.479774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.479971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.480056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.480128 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.583611 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.583690 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.583710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.583738 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.583758 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.686017 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.686067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.686079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.686097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.686109 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.791325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.791361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.791372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.791386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.791399 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.893711 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.893757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.893768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.893783 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.893792 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.995570 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.995604 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.995613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.995629 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:05 crc kubenswrapper[4872]: I0127 06:55:05.995639 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:05Z","lastTransitionTime":"2026-01-27T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.097093 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.097120 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:06 crc kubenswrapper[4872]: E0127 06:55:06.097258 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:06 crc kubenswrapper[4872]: E0127 06:55:06.097425 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.097522 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:06 crc kubenswrapper[4872]: E0127 06:55:06.097653 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.098040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.098079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.098092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.098106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.098137 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.102295 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:30:49.067469982 +0000 UTC Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.200720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.200776 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.200788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.200807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.200821 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.303524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.303570 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.303581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.303596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.303608 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.406019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.406049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.406057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.406072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.406086 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.509103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.509142 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.509152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.509176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.509194 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.617655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.617699 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.617710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.617725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.617736 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.720479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.720515 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.720526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.720542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.720553 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.822262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.822333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.822345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.822361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.822370 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.924413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.924454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.924471 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.924484 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:06 crc kubenswrapper[4872]: I0127 06:55:06.924493 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:06Z","lastTransitionTime":"2026-01-27T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.026273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.026313 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.026325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.026340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.026352 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.097759 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:07 crc kubenswrapper[4872]: E0127 06:55:07.097911 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.102994 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:57:53.936450525 +0000 UTC Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.128911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.128961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.128970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.128984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.128996 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.232098 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.232152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.232168 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.232187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.232206 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.334741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.334782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.334793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.334808 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.334818 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.438553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.438608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.438619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.438638 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.438652 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.542654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.542705 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.542717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.542735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.542747 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.645876 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.645922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.645934 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.645953 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.646010 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.748873 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.748907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.748916 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.748932 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.748943 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.851191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.851244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.851255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.851270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.851280 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.952739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.952776 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.952788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.952803 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.952815 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:07Z","lastTransitionTime":"2026-01-27T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:07 crc kubenswrapper[4872]: I0127 06:55:07.973373 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:07 crc kubenswrapper[4872]: E0127 06:55:07.973534 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:55:07 crc kubenswrapper[4872]: E0127 06:55:07.973604 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.973585168 +0000 UTC m=+148.501060364 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.055066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.055097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.055107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.055119 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.055128 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.074633 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.074762 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.074786 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.074764443 +0000 UTC m=+148.602239639 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.074864 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.074886 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.074902 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.074912 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.074921 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.074956 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.074944518 +0000 UTC m=+148.602419714 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.074973 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.075001 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:56:12.074995189 +0000 UTC m=+148.602470385 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.075003 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.075020 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.075029 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.075071 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.075062981 +0000 UTC m=+148.602538177 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.097346 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.097400 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.097482 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.097511 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.097612 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:08 crc kubenswrapper[4872]: E0127 06:55:08.097690 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.103173 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:47:26.238985788 +0000 UTC Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.156898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.156939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.156948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.156961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.156970 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.259583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.259629 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.259642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.259656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.259668 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.361944 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.361984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.361995 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.362013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.362026 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.464060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.464095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.464105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.464117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.464125 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.565970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.566009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.566022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.566038 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.566053 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.667646 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.667677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.667686 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.667697 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.667705 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.769577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.769632 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.769649 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.769673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.769690 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.872164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.872227 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.872239 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.872256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.872268 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.974650 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.974717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.974728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.974742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:08 crc kubenswrapper[4872]: I0127 06:55:08.974752 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:08Z","lastTransitionTime":"2026-01-27T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.077009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.077066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.077078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.077093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.077104 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.097613 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:09 crc kubenswrapper[4872]: E0127 06:55:09.097754 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.103789 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:31:43.434172596 +0000 UTC Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.179644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.179678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.179687 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.179700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.179708 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.282077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.282317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.282326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.282338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.282349 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.386205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.386273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.386286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.386310 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.386325 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.493238 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.493275 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.493284 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.493299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.493308 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.596443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.596496 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.596508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.596526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.596539 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.698816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.699226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.699301 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.699401 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.699473 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.802629 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.803208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.803411 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.803596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.803787 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.908116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.908151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.908160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.908178 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:09 crc kubenswrapper[4872]: I0127 06:55:09.908187 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:09Z","lastTransitionTime":"2026-01-27T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.010672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.010716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.010726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.010739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.010749 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.097628 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.097650 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.097750 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:10 crc kubenswrapper[4872]: E0127 06:55:10.097922 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:10 crc kubenswrapper[4872]: E0127 06:55:10.098033 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:10 crc kubenswrapper[4872]: E0127 06:55:10.098122 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.104052 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:15:31.609067713 +0000 UTC Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.113330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.113366 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.113376 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.113393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.113402 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.215660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.215688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.215699 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.215713 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.215721 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.318159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.318195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.318208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.318223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.318235 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.420778 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.420813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.420863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.420883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.420895 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.522882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.522924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.522940 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.522957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.522969 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.625930 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.625965 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.625974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.625986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.625996 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.728833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.728890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.728917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.728957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.728967 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.831028 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.831068 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.831077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.831091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.831101 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.932815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.932878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.932890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.932905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:10 crc kubenswrapper[4872]: I0127 06:55:10.932916 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:10Z","lastTransitionTime":"2026-01-27T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.035040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.035095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.035107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.035127 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.035138 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.097514 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.097689 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.104803 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:54:58.903936893 +0000 UTC Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.137831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.137925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.137944 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.137966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.137982 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.221540 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.221576 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.221584 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.221602 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.221612 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.236732 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.240680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.240758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.240779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.240801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.240816 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.255270 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.259125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.259188 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.259204 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.259229 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.259245 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.272302 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.276759 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.276796 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.276817 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.276857 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.276869 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.293386 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.296696 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.296730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.296742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.296760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.296771 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.308068 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:11 crc kubenswrapper[4872]: E0127 06:55:11.308165 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.309401 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.309432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.309443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.309462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.309474 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.411735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.411771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.411779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.411793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.411802 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.514193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.514221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.514229 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.514243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.514251 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.616913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.616948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.616959 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.616974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.616984 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.719397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.719434 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.719444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.719457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.719466 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.821440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.821486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.821500 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.821519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.821534 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.923955 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.923990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.923998 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.924011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:11 crc kubenswrapper[4872]: I0127 06:55:11.924020 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:11Z","lastTransitionTime":"2026-01-27T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.026630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.026665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.026675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.026688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.026697 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.097456 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.097492 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.097492 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:12 crc kubenswrapper[4872]: E0127 06:55:12.097592 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:12 crc kubenswrapper[4872]: E0127 06:55:12.097736 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:12 crc kubenswrapper[4872]: E0127 06:55:12.097794 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.105340 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:51:46.314795092 +0000 UTC Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.129385 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.129473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.129490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.129506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.129517 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.230863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.230898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.230906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.230919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.230927 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.333282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.333323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.333333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.333348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.333357 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.435465 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.435718 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.435813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.435921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.436044 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.541694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.541750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.541765 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.541785 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.541800 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.644955 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.645447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.645548 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.645628 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.645712 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.748205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.748257 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.748270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.748288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.748300 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.850356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.850569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.850693 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.850787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.850925 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.953287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.953553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.953633 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.953720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:12 crc kubenswrapper[4872]: I0127 06:55:12.953834 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:12Z","lastTransitionTime":"2026-01-27T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.056395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.056429 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.056445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.056468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.056479 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.098237 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:13 crc kubenswrapper[4872]: E0127 06:55:13.098401 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.099075 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 06:55:13 crc kubenswrapper[4872]: E0127 06:55:13.099528 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.106287 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:46:13.923240043 +0000 UTC Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.159254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.159336 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.159348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.159362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.159370 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.261143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.261180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.261191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.261208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.261223 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.363618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.363657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.363667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.363681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.363691 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.465834 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.465885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.465903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.465921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.465932 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.568565 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.568600 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.568612 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.568627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.568637 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.671195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.671232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.671241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.671255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.671263 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.778054 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.778104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.778117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.778132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.778142 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.881116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.881158 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.881167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.881181 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.881192 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.983908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.983970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.983982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.983999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:13 crc kubenswrapper[4872]: I0127 06:55:13.984011 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:13Z","lastTransitionTime":"2026-01-27T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.086644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.086676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.086686 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.086698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.086708 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.097965 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.097980 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:14 crc kubenswrapper[4872]: E0127 06:55:14.098058 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.099058 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:14 crc kubenswrapper[4872]: E0127 06:55:14.099110 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:14 crc kubenswrapper[4872]: E0127 06:55:14.099340 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.106403 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:31:57.669258311 +0000 UTC Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.112750 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.113543 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.124134 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.136710 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.147831 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.159570 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.174605 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.184298 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.190156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.190191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.190204 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.190220 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.190232 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.196742 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.206953 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.217428 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.229940 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.277635 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.292915 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.292952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.292963 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.292996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.293010 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.297346 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.317925 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.026632 6776 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.027399 6776 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 06:55:01.034479 6776 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:55:01.034554 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 06:55:01.034580 6776 factory.go:656] Stopping watch factory\\\\nI0127 06:55:01.034599 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:55:01.034611 6776 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:55:01.047465 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 06:55:01.047488 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 06:55:01.047560 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:55:01.047587 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:55:01.047661 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:55:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.327756 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.337556 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.346957 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.356468 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.395286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.395643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.395761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.395883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.395994 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.498099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.498139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.498150 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.498166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.498176 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.599791 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.599824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.599833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.599866 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.599876 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.702575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.702630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.702644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.702668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.702686 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.804363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.804631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.804715 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.804799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.804934 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.907235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.907430 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.907487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.907545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:14 crc kubenswrapper[4872]: I0127 06:55:14.907625 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:14Z","lastTransitionTime":"2026-01-27T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.010263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.010308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.010323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.010339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.010352 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.097429 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:15 crc kubenswrapper[4872]: E0127 06:55:15.097770 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.106586 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:45:38.324181975 +0000 UTC Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.113071 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.113532 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.113631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.113711 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.113791 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.216275 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.216311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.216322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.216336 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.216345 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.319975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.320029 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.320047 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.320071 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.320083 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.423438 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.423882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.424419 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.424647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.424725 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.527724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.527793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.527807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.527831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.527870 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.630936 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.630980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.630992 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.631006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.631017 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.733918 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.733970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.733984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.734008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.734022 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.836899 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.836948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.836962 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.836981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.836995 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.940445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.940488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.940497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.940513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:15 crc kubenswrapper[4872]: I0127 06:55:15.940523 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:15Z","lastTransitionTime":"2026-01-27T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.043732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.043795 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.043807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.043830 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.043864 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.098250 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.098351 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.098345 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:16 crc kubenswrapper[4872]: E0127 06:55:16.098552 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:16 crc kubenswrapper[4872]: E0127 06:55:16.098763 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:16 crc kubenswrapper[4872]: E0127 06:55:16.099119 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.107195 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:57:28.361861396 +0000 UTC Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.147191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.147228 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.147241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.147262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.147275 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.250923 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.250977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.250988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.251006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.251016 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.354255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.354349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.354361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.354403 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.354415 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.458005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.458089 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.458132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.458156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.458170 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.561303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.561372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.561386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.561409 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.561427 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.664601 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.664651 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.664665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.664687 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.664737 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.769163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.769195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.769204 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.769218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.769228 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.871921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.871991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.872006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.872034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.872052 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.975093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.975172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.975190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.975215 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:16 crc kubenswrapper[4872]: I0127 06:55:16.975233 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:16Z","lastTransitionTime":"2026-01-27T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.078328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.078387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.078402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.078426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.078444 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.097576 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:17 crc kubenswrapper[4872]: E0127 06:55:17.097745 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.107934 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:48:14.603893716 +0000 UTC Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.182430 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.182474 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.182498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.182517 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.182530 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.284979 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.285028 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.285038 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.285053 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.285063 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.388073 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.388133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.388149 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.388172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.388189 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.491058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.491108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.491125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.491145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.491156 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.593913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.593976 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.593988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.594003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.594014 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.705643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.706023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.706050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.706079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.706100 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.809973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.810040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.810060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.810094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.810117 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.913415 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.913461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.913478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.913500 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:17 crc kubenswrapper[4872]: I0127 06:55:17.913515 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:17Z","lastTransitionTime":"2026-01-27T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.017096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.017180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.017210 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.017248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.017276 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.097766 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.097796 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.097966 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:18 crc kubenswrapper[4872]: E0127 06:55:18.098060 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:18 crc kubenswrapper[4872]: E0127 06:55:18.098092 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:18 crc kubenswrapper[4872]: E0127 06:55:18.098143 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.108343 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:01:17.426219418 +0000 UTC Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.120509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.120993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.121159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.121308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.121462 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.224289 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.224330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.224338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.224352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.224361 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.327961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.328007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.328020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.328040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.328051 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.431104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.431164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.431176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.431194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.431207 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.533679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.533754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.533767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.533786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.533796 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.636020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.636085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.636099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.636121 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.636134 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.739808 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.739888 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.739904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.739923 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.739937 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.842657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.842723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.842733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.842758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.842771 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.946200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.946268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.946281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.946305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:18 crc kubenswrapper[4872]: I0127 06:55:18.946324 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:18Z","lastTransitionTime":"2026-01-27T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.048799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.048860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.048877 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.048894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.048907 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.097879 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:19 crc kubenswrapper[4872]: E0127 06:55:19.098033 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.108762 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:33:40.910324485 +0000 UTC Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.152093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.152148 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.152162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.152182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.152194 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.255488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.255561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.255576 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.255603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.255622 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.359627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.359679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.359689 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.359704 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.359715 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.462316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.462377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.462391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.462427 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.462447 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.565594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.565672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.565687 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.565715 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.565739 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.668048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.668142 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.668162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.668197 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.668219 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.770883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.770954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.770966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.770988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.771002 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.873642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.873684 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.873695 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.873714 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.873725 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.976382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.976426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.976443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.976461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:19 crc kubenswrapper[4872]: I0127 06:55:19.976473 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:19Z","lastTransitionTime":"2026-01-27T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.079041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.079085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.079101 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.079115 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.079126 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.097363 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.097363 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:20 crc kubenswrapper[4872]: E0127 06:55:20.097532 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.097384 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:20 crc kubenswrapper[4872]: E0127 06:55:20.097686 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:20 crc kubenswrapper[4872]: E0127 06:55:20.097759 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.109162 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:30:27.811047884 +0000 UTC Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.181782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.181830 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.181860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.181877 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.181889 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.285268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.285328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.285346 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.285365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.285377 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.388499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.388558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.388568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.388581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.388590 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.490771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.490877 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.490890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.490909 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.490922 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.593620 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.593660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.593670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.593721 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.593733 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.696229 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.696274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.696286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.696305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.696320 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.798447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.798520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.798531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.798543 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.798552 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.902334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.902396 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.902408 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.902429 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:20 crc kubenswrapper[4872]: I0127 06:55:20.902445 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:20Z","lastTransitionTime":"2026-01-27T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.005863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.005959 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.005980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.006007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.006025 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.097925 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.098156 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.109265 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:17:59.026870501 +0000 UTC Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.109522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.109592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.109619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.109651 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.109674 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.211751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.211786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.211797 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.211813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.211823 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.314088 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.314117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.314126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.314139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.314148 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.417141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.417181 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.417190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.417205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.417215 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.519404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.519755 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.519831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.519917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.519981 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.603104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.603146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.603160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.603177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.603188 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.615066 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:21Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.619093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.619126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.619137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.619152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.619161 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.634306 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:21Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.637520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.637563 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.637576 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.637594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.637607 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.648725 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:21Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.652360 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.652421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.652432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.652451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.652463 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.668953 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:21Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.672437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.672476 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.672509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.672527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.672539 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.686620 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:21Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:21 crc kubenswrapper[4872]: E0127 06:55:21.686851 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.688713 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.688739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.688747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.688759 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.688769 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.790433 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.790467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.790478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.790493 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.790504 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.892531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.892565 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.892574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.892587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.892612 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.995242 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.995282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.995294 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.995313 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:21 crc kubenswrapper[4872]: I0127 06:55:21.995326 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:21Z","lastTransitionTime":"2026-01-27T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097026 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097112 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097026 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: E0127 06:55:22.097328 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.097381 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: E0127 06:55:22.097402 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:22 crc kubenswrapper[4872]: E0127 06:55:22.097455 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.109731 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:31:36.000334868 +0000 UTC Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.199756 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.199795 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.199806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.199821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.199833 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.302331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.302552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.302570 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.302587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.302598 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.404634 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.404688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.404705 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.404725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.404739 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.507265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.507321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.507333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.507352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.507366 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.610089 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.610192 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.610234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.610254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.610297 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.631474 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:22 crc kubenswrapper[4872]: E0127 06:55:22.631688 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:55:22 crc kubenswrapper[4872]: E0127 06:55:22.631785 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs podName:f22e033f-46c7-4d57-a333-e1eee5cd3091 nodeName:}" failed. No retries permitted until 2026-01-27 06:56:26.631760572 +0000 UTC m=+163.159235958 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs") pod "network-metrics-daemon-nstjz" (UID: "f22e033f-46c7-4d57-a333-e1eee5cd3091") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.713635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.713688 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.713702 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.713719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.713730 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.816424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.816468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.816479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.816494 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.816506 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.918322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.918366 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.918377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.918394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:22 crc kubenswrapper[4872]: I0127 06:55:22.918404 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:22Z","lastTransitionTime":"2026-01-27T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.020685 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.020716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.020724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.020737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.020745 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.097196 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:23 crc kubenswrapper[4872]: E0127 06:55:23.097343 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.110635 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:56:15.215306318 +0000 UTC Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.123203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.123242 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.123252 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.123268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.123302 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.225535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.225567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.225576 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.225590 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.225600 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.327813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.327875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.327886 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.327903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.327915 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.430668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.430743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.430754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.430768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.430777 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.533972 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.534010 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.534021 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.534036 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.534048 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.636988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.637031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.637040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.637058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.637068 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.739035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.739091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.739101 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.739119 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.739131 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.842414 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.842495 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.842508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.842533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.842550 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.945036 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.945092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.945104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.945122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:23 crc kubenswrapper[4872]: I0127 06:55:23.945132 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:23Z","lastTransitionTime":"2026-01-27T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.048231 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.048283 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.048300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.048319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.048332 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.097518 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:24 crc kubenswrapper[4872]: E0127 06:55:24.097642 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.097793 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.097805 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:24 crc kubenswrapper[4872]: E0127 06:55:24.097905 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:24 crc kubenswrapper[4872]: E0127 06:55:24.098128 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.109692 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.111738 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:15:23.667182468 +0000 UTC Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.120652 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ea42312-a362-48cd-8387-34c060df18a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://968d37d521e74241a87419a42be8aa6695fd147461c822838207097b53350e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17722
5c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7tgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkvlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.131630 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nvjgr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8575a338-fc73-4413-ab05-0fdfdd6bdf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:54:53Z\\\",\\\"message\\\":\\\"2026-01-27T06:54:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756\\\\n2026-01-27T06:54:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c45b83a-ae43-44ff-9b96-28b3d9cf7756 to /host/opt/cni/bin/\\\\n2026-01-27T06:54:08Z [verbose] multus-daemon started\\\\n2026-01-27T06:54:08Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:54:53Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99glr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nvjgr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.150430 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:55:01Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.026632 6776 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:55:01.027399 6776 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0127 06:55:01.034479 6776 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:55:01.034554 6776 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 06:55:01.034580 6776 factory.go:656] Stopping watch factory\\\\nI0127 06:55:01.034599 6776 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:55:01.034611 6776 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:55:01.047465 6776 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 06:55:01.047488 6776 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 06:55:01.047560 6776 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:55:01.047587 6776 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:55:01.047661 6776 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:55:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xnln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ww8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.152179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.152218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.152226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.152240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.152250 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.160715 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jfj5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"731616c6-fda8-4ce3-b678-42c61255141c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d6d0c1bcdff3ba5f0260f9b91eeab4d662919ad90b60e93c53b0fd20c0e6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b24c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jfj5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.169320 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0dd76d-6c6f-4cf0-bd27-b55c9a616e37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://960b06167d64b9abead49c9a5166906a94b96d16d2fc70ea4e52b5b0806aa9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a2b7935217b139e04adab18e5710ef18d8e1c41200b39d489883921d83d3923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.178328 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89bc130f-996f-40e6-9015-b2023a608044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e6c7a4be23eda515f23c3e318f7c42ac1484adaa8677ce44ec7143383a6d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc23fe29f9d4d2d9e6edfd276ad183b8320ad31dfe2badbbde38d4bdceddb2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-544tw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-whdsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 
06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.187946 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nstjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f22e033f-46c7-4d57-a333-e1eee5cd3091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvndr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nstjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.201407 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d3d5eed-f827-4822-86b4-b98a91ac772a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 06:53:57.901955 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:53:57.904366 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2765791245/tls.crt::/tmp/serving-cert-2765791245/tls.key\\\\\\\"\\\\nI0127 06:54:03.344990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:54:03.347662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:54:03.347681 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:54:03.347704 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:54:03.347710 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:54:03.353465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 06:54:03.353481 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 06:54:03.353494 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353501 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:54:03.353505 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:54:03.353509 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:54:03.353512 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:54:03.353515 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 06:54:03.355774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.212700 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a698e6b-74e8-4acc-9a3f-3b29b7b21d88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870f383a1988f87895055e6dcdb91de478051d43da0da986adb72df19e4b299e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c84326c6e19dc2c31e4995d2db98f1fabe68663aae9c395a62af3104d5830de0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac7ab747de00c6181018b3cc1161ea924b449511b336a92f15b45ef0f9a88efb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.224723 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e442ab11e63a77c5fba0b9adc704327453af85f459dc4282040684981d4aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3581bf2a77506e597274cd0c8c843dc72af08ec731812cd76d46ca765c950fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.235944 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.248132 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.255116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.255156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.255167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.255182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.255192 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.262247 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7097f1e-1b27-4ad4-a772-f62ec2fae899\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36bbecbc673690b221cb7745052e032d1336e9d0a8917a80380ee9e748dbd94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b0d2342ca5381deef411a9550fc40e7830a4e42ff79b0b6fba6b4cc653f32c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c90e71294244db1dd694c9cc3061d9658db51caf9645ff2112e358b9e21a369a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52544c7b7164e6e3a20f7c3b4e0bffd9036ed39678c36b2685446a2ae9feb749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d721d26fb4cb35730d0580c794f390b907c4cef45a596e75c4db1dc65d5b6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b683ca18229d0beada8a706dfe74604598ef6a6c71aeee079f68f4f974b1921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6b352e87bfb3d0f71bec9b17af8a9e74c6181120b8688fa468427f579bd969e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7mxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tk2w6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.279753 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc1734e-a7f5-4d8d-9700-8ac875a96407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d0e2a1336a9c07f3cdf109e66a47652d056fff7e31b2a683cb795310cfb0fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7ed2e2a4017530386f9bd9e8278d0000ee396e297573a3bbbeb1cf4266b13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7e498ad23b15a6b39f3af7bb31f9aa5ce5a4a630ae6d5f742522d057770dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://1898c6f24c628b207b666cf2db01394c39927a26008d384f52ee851b3d2df23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8efc561404cb0a4245d43c95d492c08a3024c2b622b318930f8ab2c7cd422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a309889410e408cfde09a06793a72c65c69b9812783fc7b1deb663d07ea400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a309889410e408cfde09a06793a72c65c69b9812783fc7b1deb663d07ea400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e1107d6bc151aa51fd301fc6617969004a4e72ccc7c145cc914e7b50c8e28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e1107d6bc151aa51fd301fc6617969004a4e72ccc7c145cc914e7b50c8e28b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47525a2a43
bb595707e690c1bff883f08b95213e396d60d28794c7aba6e633b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47525a2a43bb595707e690c1bff883f08b95213e396d60d28794c7aba6e633b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.290778 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.304962 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 
2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.319356 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9dc265c148353b7832bb3e029c0332cbecdab956f46adbdec1f11fae13797c58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.332301 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:24Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.358213 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.358269 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.358292 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.358316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.358334 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.461592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.461644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.461656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.461673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.461684 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.564369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.564401 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.564409 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.564423 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.564432 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.667887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.668026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.668046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.668066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.668467 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.771127 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.771161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.771170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.771184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.771193 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.873986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.874032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.874041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.874059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.874075 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.976870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.976924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.976933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.976946 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:24 crc kubenswrapper[4872]: I0127 06:55:24.976955 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:24Z","lastTransitionTime":"2026-01-27T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.079969 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.080055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.080219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.080449 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.080484 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.097595 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:25 crc kubenswrapper[4872]: E0127 06:55:25.097836 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.111947 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:24:03.651787184 +0000 UTC Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.184444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.184504 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.184518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.184536 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.184547 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.287974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.289514 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.289555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.289591 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.289620 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.393122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.393168 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.393178 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.393199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.393211 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.496132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.496223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.496242 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.496271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.496289 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.598727 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.598770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.598778 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.598791 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.598800 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.701435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.701502 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.701514 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.701531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.701542 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.803408 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.803448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.803458 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.803471 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.803480 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.905734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.905778 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.905789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.905803 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:25 crc kubenswrapper[4872]: I0127 06:55:25.905814 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:25Z","lastTransitionTime":"2026-01-27T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.007964 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.008000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.008008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.008019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.008028 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.098184 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.098184 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.098624 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.098715 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 06:55:26 crc kubenswrapper[4872]: E0127 06:55:26.099032 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:26 crc kubenswrapper[4872]: E0127 06:55:26.099056 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:55:26 crc kubenswrapper[4872]: E0127 06:55:26.099128 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:26 crc kubenswrapper[4872]: E0127 06:55:26.099430 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.109866 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.109907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.109919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.109933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.109943 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.112157 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:53:11.104127183 +0000 UTC Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.212275 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.212306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.212319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.212333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.212343 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.314367 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.314407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.314417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.314432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.314444 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.416514 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.416582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.416594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.416610 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.416628 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.518858 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.518893 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.518905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.518918 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.518928 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.622110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.622673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.622807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.623067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.623173 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.726059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.726116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.726126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.726145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.726157 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.829204 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.829342 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.829376 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.829415 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.829440 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.932533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.932588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.932604 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.932631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:26 crc kubenswrapper[4872]: I0127 06:55:26.932657 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:26Z","lastTransitionTime":"2026-01-27T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.036454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.036555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.036582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.036616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.036648 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.097108 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:27 crc kubenswrapper[4872]: E0127 06:55:27.097294 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.112543 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:35:04.011761192 +0000 UTC Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.140720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.140811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.140837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.140902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.140929 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.244878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.244931 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.244950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.244976 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.244998 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.347480 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.347519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.347527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.347543 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.347553 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.450547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.450641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.450658 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.450683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.450695 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.553601 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.553665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.553684 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.553709 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.553726 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.656009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.656107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.656121 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.656143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.656168 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.759091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.759187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.759201 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.759223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.759245 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.861124 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.861155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.861162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.861174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.861183 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.963730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.963769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.963779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.963793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:27 crc kubenswrapper[4872]: I0127 06:55:27.963801 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:27Z","lastTransitionTime":"2026-01-27T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.066308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.066355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.066370 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.066390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.066405 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.097428 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.097594 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:28 crc kubenswrapper[4872]: E0127 06:55:28.097774 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:28 crc kubenswrapper[4872]: E0127 06:55:28.097947 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.098543 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:28 crc kubenswrapper[4872]: E0127 06:55:28.098638 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.113736 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:12:00.872159647 +0000 UTC Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.168915 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.168951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.168961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.168974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.168983 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.271324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.271371 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.271380 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.271395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.271406 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.373571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.373607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.373616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.373628 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.373636 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.475736 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.475800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.475811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.475833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.475897 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.577907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.577941 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.577949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.577968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.577981 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.679817 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.679889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.679898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.679911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.679920 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.782671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.782706 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.782717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.782730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.782739 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.885250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.885316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.885332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.885349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.885361 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.988278 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.988317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.988328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.988344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:28 crc kubenswrapper[4872]: I0127 06:55:28.988356 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:28Z","lastTransitionTime":"2026-01-27T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.090218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.090248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.090258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.090273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.090284 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.097358 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:29 crc kubenswrapper[4872]: E0127 06:55:29.097463 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.113965 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:37:57.991781357 +0000 UTC Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.192628 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.192661 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.192673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.192690 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.192702 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.294982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.295023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.295032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.295044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.295052 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.396719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.396748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.396756 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.396768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.396776 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.498678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.498716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.498728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.498743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.498753 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.600583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.600630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.600641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.600654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.600663 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.703197 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.703235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.703245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.703260 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.703271 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.805472 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.805540 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.805551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.805591 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.805603 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.907669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.907716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.907726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.907741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:29 crc kubenswrapper[4872]: I0127 06:55:29.907750 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:29Z","lastTransitionTime":"2026-01-27T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.009612 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.009639 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.009647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.009658 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.009668 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.097679 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.097737 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.097785 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:30 crc kubenswrapper[4872]: E0127 06:55:30.097928 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:30 crc kubenswrapper[4872]: E0127 06:55:30.098046 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:30 crc kubenswrapper[4872]: E0127 06:55:30.098343 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.111114 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.111137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.111147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.111157 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.111166 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.114417 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:24:14.939563648 +0000 UTC Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.212788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.212836 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.212869 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.212882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.212893 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.315175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.315212 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.315223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.315238 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.315249 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.417617 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.417656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.417664 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.417679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.417692 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.520537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.520576 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.520586 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.520600 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.520609 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.623882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.623947 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.623961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.623986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.624002 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.727103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.727159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.727172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.727189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.727201 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.831838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.831976 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.832013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.832049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.832075 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.935545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.935624 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.935644 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.935674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:30 crc kubenswrapper[4872]: I0127 06:55:30.935695 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:30Z","lastTransitionTime":"2026-01-27T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.037813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.037871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.037885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.037900 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.037914 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.097489 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.097627 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.114981 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:41:41.242966997 +0000 UTC Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.140753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.140789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.140798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.140813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.140824 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.242884 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.242933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.242958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.242978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.242992 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.345949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.345987 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.345996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.346011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.346020 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.450273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.450324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.450334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.450350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.450360 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.552933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.552990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.553002 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.553023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.553034 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.656218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.656275 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.656288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.656312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.656329 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.759152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.759209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.759222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.759241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.759253 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.858055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.858129 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.858144 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.858171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.858187 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.873969 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.879835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.879905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.879917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.879938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.879954 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.897521 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.902561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.902619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.902632 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.902654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.902668 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.919607 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.924575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.924670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.924700 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.924739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.924764 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.949193 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.955276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.955339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.955350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.955368 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.955380 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.974131 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:55:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37e22313-b71b-4ef4-bf05-eb3dbac65b5b\\\",\\\"systemUUID\\\":\\\"91b3bc63-e466-472c-acc7-7b74e49fca03\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:31 crc kubenswrapper[4872]: E0127 06:55:31.974379 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.977348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.977407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.977416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.977435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:31 crc kubenswrapper[4872]: I0127 06:55:31.977447 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:31Z","lastTransitionTime":"2026-01-27T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.080985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.081033 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.081044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.081064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.081080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.103566 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:32 crc kubenswrapper[4872]: E0127 06:55:32.103699 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.103779 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.103895 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:32 crc kubenswrapper[4872]: E0127 06:55:32.103995 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:32 crc kubenswrapper[4872]: E0127 06:55:32.104653 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.115428 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:52:17.916414519 +0000 UTC Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.183172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.183205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.183214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.183227 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.183236 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.285530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.285599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.285611 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.285627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.285642 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.387957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.387991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.388004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.388020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.388031 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.489586 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.489615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.489624 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.489636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.489643 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.592240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.592294 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.592314 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.592332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.592343 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.694206 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.694243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.694251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.694265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.694274 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.796925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.796977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.796987 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.797001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.797012 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.899109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.899168 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.899180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.899200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:32 crc kubenswrapper[4872]: I0127 06:55:32.899213 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:32Z","lastTransitionTime":"2026-01-27T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.000817 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.000937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.000948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.000961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.000988 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.098140 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:33 crc kubenswrapper[4872]: E0127 06:55:33.098293 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.103668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.103698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.103708 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.103722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.103731 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.116115 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:54:21.285413001 +0000 UTC Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.206253 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.206288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.206296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.206308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.206332 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.308768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.308992 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.309006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.309022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.309033 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.411095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.411130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.411139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.411156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.411176 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.513287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.513322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.513332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.513346 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.513356 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.615331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.615365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.615373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.615386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.615394 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.717343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.717539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.717553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.717565 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.717575 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.819426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.819498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.819513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.819527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.819535 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.922303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.922338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.922349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.922365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:33 crc kubenswrapper[4872]: I0127 06:55:33.922377 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:33Z","lastTransitionTime":"2026-01-27T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.025225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.025255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.025271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.025287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.025297 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.097759 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.097856 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.097876 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:34 crc kubenswrapper[4872]: E0127 06:55:34.098143 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:34 crc kubenswrapper[4872]: E0127 06:55:34.098229 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:34 crc kubenswrapper[4872]: E0127 06:55:34.098292 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.108598 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nf5b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a379b846-ea80-4665-a69c-79b745d168ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138807d8027721d60da15e89b27beecebef861b356cd53137ee66f48eb0dea3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljtzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:54:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nf5b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.116306 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:01:57.1827705 +0000 UTC Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.127561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.127599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.127610 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.127627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.127637 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.128298 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc1734e-a7f5-4d8d-9700-8ac875a96407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d0e2a1336a9c07f3cdf109e66a47652d056fff7e31b2a683cb795310cfb0fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7ed2e2a4017530386f9bd9e8278d0000ee396e297573a3bbbeb1cf4266b13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7e498ad23b15a6b39f3af7bb31f9aa5ce5a4a630ae6d5f742522d057770dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1898c6f24c628b207b666cf2db01394c39927a2
6008d384f52ee851b3d2df23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8efc561404cb0a4245d43c95d492c08a3024c2b622b318930f8ab2c7cd422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a309889410e408cfde09a06793a72c65c69b9812783fc7b1deb663d07ea400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a309889410e408cfde09a06793a72c65c69b9812783fc7b1deb663d07ea400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e1107d6bc151aa51fd301fc6617969004a4e72ccc7c145cc914e7b50c8e28b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e1107d6bc151aa51fd301fc6617969004a4e72ccc7c145cc914e7b50c8e28b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://47525a2a43bb595707e690c1bff883f08b95213e396d60d28794c7aba6e633b6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47525a2a43bb595707e690c1bff883f08b95213e396d60d28794c7aba6e633b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.138578 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84772df7-3165-4527-b995-3d047610efe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f7a74a15b1cbc8629163aad3dccab4a717289d1f60a110fc7eb138b270e3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bacf5c751024ce9db129ca3707df22d9f73e42434168cdda4e48145e96bcdd3\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc80002d1a062927c4a493308999190edc5cadd30fd7fc5af2f7ed7920c3ede9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50499e01f9702ddc89a7e12e5c001c18a01c933fee1913c2338feb3a01d9354\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.150284 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:54:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cdda1b5f73ab3d46e928e638378ae04c28c766bf559447c91de3389f2ae1316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.190862 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jfj5q" podStartSLOduration=90.190829338 podStartE2EDuration="1m30.190829338s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.181978445 +0000 UTC m=+110.709453641" watchObservedRunningTime="2026-01-27 06:55:34.190829338 +0000 UTC m=+110.718304534" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.220822 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podStartSLOduration=90.220805578 podStartE2EDuration="1m30.220805578s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.220335995 +0000 UTC m=+110.747811201" watchObservedRunningTime="2026-01-27 06:55:34.220805578 +0000 UTC m=+110.748280774" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.229956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.229989 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.230000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.230016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.230028 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.237328 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nvjgr" podStartSLOduration=90.237312262 podStartE2EDuration="1m30.237312262s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.23687087 +0000 UTC m=+110.764346076" watchObservedRunningTime="2026-01-27 06:55:34.237312262 +0000 UTC m=+110.764787458" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.279128 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=40.279108333 podStartE2EDuration="40.279108333s" podCreationTimestamp="2026-01-27 06:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.27900317 +0000 UTC m=+110.806478366" watchObservedRunningTime="2026-01-27 06:55:34.279108333 +0000 UTC m=+110.806583529" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.298809 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-whdsn" podStartSLOduration=90.298790171 podStartE2EDuration="1m30.298790171s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.28847654 +0000 UTC m=+110.815951736" watchObservedRunningTime="2026-01-27 06:55:34.298790171 +0000 UTC m=+110.826265367" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.331905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.331938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.331947 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.331979 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.331988 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.342738 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tk2w6" podStartSLOduration=90.342721149 podStartE2EDuration="1m30.342721149s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.329472909 +0000 UTC m=+110.856948125" watchObservedRunningTime="2026-01-27 06:55:34.342721149 +0000 UTC m=+110.870196345" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.358080 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.358061663 podStartE2EDuration="1m31.358061663s" podCreationTimestamp="2026-01-27 06:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.356713667 +0000 UTC m=+110.884188863" watchObservedRunningTime="2026-01-27 06:55:34.358061663 +0000 UTC m=+110.885536859" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.358417 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.358411492 podStartE2EDuration="1m30.358411492s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:34.342712709 +0000 UTC m=+110.870187905" watchObservedRunningTime="2026-01-27 06:55:34.358411492 +0000 UTC m=+110.885886688" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.434382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.434426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.434437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.434453 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.434464 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.537166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.537225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.537237 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.537252 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.537263 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.640016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.640058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.640068 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.640084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.640095 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.742153 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.742774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.742921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.743000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.743212 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.845174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.845212 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.845222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.845236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.845245 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.947330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.947364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.947373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.947386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:34 crc kubenswrapper[4872]: I0127 06:55:34.947396 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:34Z","lastTransitionTime":"2026-01-27T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.049952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.049983 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.049992 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.050005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.050015 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.097401 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:35 crc kubenswrapper[4872]: E0127 06:55:35.097561 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.116697 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:56:46.032317014 +0000 UTC Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.153157 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.153208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.153221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.153236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.153246 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.255465 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.255532 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.255547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.255562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.255574 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.357235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.357286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.357298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.357315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.357355 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.460031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.460067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.460077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.460092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.460101 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.562957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.562996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.563005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.563018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.563032 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.667081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.667139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.667147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.667162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.667191 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.769219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.769254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.769266 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.769282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.769294 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.871764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.871801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.871810 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.871829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.871866 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.974099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.974405 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.974506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.974607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:35 crc kubenswrapper[4872]: I0127 06:55:35.974708 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:35Z","lastTransitionTime":"2026-01-27T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.077125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.077163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.077173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.077190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.077203 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.099246 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.099257 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.099394 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:36 crc kubenswrapper[4872]: E0127 06:55:36.099598 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:36 crc kubenswrapper[4872]: E0127 06:55:36.100057 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:36 crc kubenswrapper[4872]: E0127 06:55:36.100532 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.117346 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:47:26.419729241 +0000 UTC Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.179719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.179800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.179811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.179826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.179837 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.282223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.282288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.282298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.282313 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.282322 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.384909 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.384951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.384963 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.384979 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.384991 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.487125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.487193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.487209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.487228 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.487242 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.590057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.590111 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.590123 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.590136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.590146 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.692477 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.692780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.692882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.692992 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.693077 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.794608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.794652 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.794664 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.794679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.794691 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.896668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.897033 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.897196 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.897305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.897398 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.999543 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.999765 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:36 crc kubenswrapper[4872]: I0127 06:55:36.999901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.000008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.000110 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:36Z","lastTransitionTime":"2026-01-27T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.097465 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:37 crc kubenswrapper[4872]: E0127 06:55:37.097712 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.106875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.106905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.106915 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.106930 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.106941 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.117542 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:06:01.861897481 +0000 UTC Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.208395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.208432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.208443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.208457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.208468 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.310698 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.310734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.310744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.310759 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.310771 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.413165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.413204 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.413212 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.413226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.413237 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.515887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.515927 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.515937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.515950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.515961 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.618362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.618412 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.618423 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.618439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.618450 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.720526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.720565 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.720575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.720589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.720598 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.823390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.823426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.823437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.823449 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.823458 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.925978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.926032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.926041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.926054 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:37 crc kubenswrapper[4872]: I0127 06:55:37.926064 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:37Z","lastTransitionTime":"2026-01-27T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.028229 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.028260 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.028268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.028282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.028290 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.098057 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.098107 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.098109 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:38 crc kubenswrapper[4872]: E0127 06:55:38.098192 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:38 crc kubenswrapper[4872]: E0127 06:55:38.098264 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:38 crc kubenswrapper[4872]: E0127 06:55:38.098390 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.118288 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:25:18.256610521 +0000 UTC Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.130716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.130751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.130760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.130773 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.130782 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.233428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.233483 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.233492 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.233504 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.233513 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.335371 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.335431 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.335442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.335457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.335468 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.437552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.437606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.437631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.437653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.437666 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.539831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.539937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.539958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.539978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.539992 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.641815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.641873 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.641885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.641928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.641938 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.744162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.744416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.744484 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.744589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.744685 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.847240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.847764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.847836 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.847978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.848048 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.951011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.951048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.951061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.951079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:38 crc kubenswrapper[4872]: I0127 06:55:38.951091 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:38Z","lastTransitionTime":"2026-01-27T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.056216 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.056293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.056305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.056330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.056342 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.097664 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:39 crc kubenswrapper[4872]: E0127 06:55:39.097968 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.098889 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 06:55:39 crc kubenswrapper[4872]: E0127 06:55:39.099001 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ww8p7_openshift-ovn-kubernetes(b62e2eec-d750-4b03-90a4-4082a5d8ca18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.119026 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:49:17.97926771 +0000 UTC Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.158952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.159172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.159183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.159199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.159212 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.261032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.261361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.261382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.261437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.261449 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.363778 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.363809 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.363821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.363837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.363864 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.465374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.465414 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.465425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.465439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.465450 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.567426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.567459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.567467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.567479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.567487 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.669360 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.669399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.669413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.669433 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.669445 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.759127 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/1.log" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.759481 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/0.log" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.759533 4872 generic.go:334] "Generic (PLEG): container finished" podID="8575a338-fc73-4413-ab05-0fdfdd6bdf2d" containerID="07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20" exitCode=1 Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.759572 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerDied","Data":"07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.759614 4872 scope.go:117] "RemoveContainer" containerID="573e7dfb5fda8cb5afcac7e0101b52e2847d5c1004a6f317cee8f563770b7f28" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.760020 4872 scope.go:117] "RemoveContainer" containerID="07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20" Jan 27 06:55:39 crc kubenswrapper[4872]: E0127 06:55:39.762005 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nvjgr_openshift-multus(8575a338-fc73-4413-ab05-0fdfdd6bdf2d)\"" pod="openshift-multus/multus-nvjgr" podUID="8575a338-fc73-4413-ab05-0fdfdd6bdf2d" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.771805 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.771858 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.771868 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.771883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.771892 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.800497 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=25.800477732 podStartE2EDuration="25.800477732s" podCreationTimestamp="2026-01-27 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:39.786637248 +0000 UTC m=+116.314112444" watchObservedRunningTime="2026-01-27 06:55:39.800477732 +0000 UTC m=+116.327952928" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.801190 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=67.80118451 podStartE2EDuration="1m7.80118451s" podCreationTimestamp="2026-01-27 06:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:39.799670281 +0000 UTC m=+116.327145487" watchObservedRunningTime="2026-01-27 06:55:39.80118451 +0000 UTC m=+116.328659706" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.824056 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nf5b8" podStartSLOduration=95.824039883 podStartE2EDuration="1m35.824039883s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:39.821693921 +0000 UTC m=+116.349169117" watchObservedRunningTime="2026-01-27 06:55:39.824039883 +0000 UTC m=+116.351515069" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.874973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.875007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.875017 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.875031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.875041 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.977549 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.977594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.977606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.977632 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:39 crc kubenswrapper[4872]: I0127 06:55:39.977646 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:39Z","lastTransitionTime":"2026-01-27T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.080477 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.080533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.080545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.080582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.080593 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.097294 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.097317 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:40 crc kubenswrapper[4872]: E0127 06:55:40.097443 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.097532 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:40 crc kubenswrapper[4872]: E0127 06:55:40.097665 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:40 crc kubenswrapper[4872]: E0127 06:55:40.097775 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.119280 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:46:30.287168836 +0000 UTC Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.182234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.182298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.182312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.182333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.182347 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.284394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.284437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.284445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.284459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.284468 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.386726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.386761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.386772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.386785 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.386794 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.489193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.489234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.489243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.489258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.489267 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.591312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.591356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.591365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.591379 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.591388 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.693473 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.693513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.693521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.693537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.693547 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.764864 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/1.log" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.795448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.795491 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.795504 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.795520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.795533 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.898198 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.898237 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.898247 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.898264 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:40 crc kubenswrapper[4872]: I0127 06:55:40.898275 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:40Z","lastTransitionTime":"2026-01-27T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.000749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.000802 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.000816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.000885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.000899 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.097078 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:41 crc kubenswrapper[4872]: E0127 06:55:41.097208 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.104093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.104132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.104141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.104156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.104167 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.119823 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:26:06.976721023 +0000 UTC Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.206463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.206504 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.206515 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.206533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.206545 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.309060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.309102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.309137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.309156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.309167 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.412522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.412564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.412574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.412588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.412597 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.515073 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.515120 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.515132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.515151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.515163 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.617741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.617784 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.617796 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.617815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.617827 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.720508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.720546 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.720561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.720577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.720587 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.823512 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.823990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.824072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.824174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.824273 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.927521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.927567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.927577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.927598 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:41 crc kubenswrapper[4872]: I0127 06:55:41.927609 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:41Z","lastTransitionTime":"2026-01-27T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.030295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.030347 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.030356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.030369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.030377 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:42Z","lastTransitionTime":"2026-01-27T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.097464 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.097516 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:42 crc kubenswrapper[4872]: E0127 06:55:42.097614 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:42 crc kubenswrapper[4872]: E0127 06:55:42.097758 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.098482 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:42 crc kubenswrapper[4872]: E0127 06:55:42.098894 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.119993 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:32:25.052535982 +0000 UTC Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.132745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.132786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.132798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.132815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.132827 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:42Z","lastTransitionTime":"2026-01-27T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.235106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.235152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.235169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.235184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.235196 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:42Z","lastTransitionTime":"2026-01-27T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.245357 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.245384 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.245393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.245403 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.245411 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:55:42Z","lastTransitionTime":"2026-01-27T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.288467 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq"] Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.288826 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.290809 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.291093 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.291215 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.291308 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.317401 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02edc701-07c3-4908-9174-5669a1b0796b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.317468 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02edc701-07c3-4908-9174-5669a1b0796b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.317484 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02edc701-07c3-4908-9174-5669a1b0796b-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.317534 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02edc701-07c3-4908-9174-5669a1b0796b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.317564 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02edc701-07c3-4908-9174-5669a1b0796b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.418335 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02edc701-07c3-4908-9174-5669a1b0796b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.418670 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02edc701-07c3-4908-9174-5669a1b0796b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.419724 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02edc701-07c3-4908-9174-5669a1b0796b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.419895 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02edc701-07c3-4908-9174-5669a1b0796b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.420017 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02edc701-07c3-4908-9174-5669a1b0796b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.419635 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02edc701-07c3-4908-9174-5669a1b0796b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: 
\"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.419928 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02edc701-07c3-4908-9174-5669a1b0796b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.419898 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02edc701-07c3-4908-9174-5669a1b0796b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.428949 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02edc701-07c3-4908-9174-5669a1b0796b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.433558 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02edc701-07c3-4908-9174-5669a1b0796b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k9bkq\" (UID: \"02edc701-07c3-4908-9174-5669a1b0796b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.603891 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" Jan 27 06:55:42 crc kubenswrapper[4872]: I0127 06:55:42.770710 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" event={"ID":"02edc701-07c3-4908-9174-5669a1b0796b","Type":"ContainerStarted","Data":"f242bcd3d756a7ab19b897c3bc1480555e5d72c0490d2826016511c885f7a73c"} Jan 27 06:55:43 crc kubenswrapper[4872]: I0127 06:55:43.098144 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:43 crc kubenswrapper[4872]: E0127 06:55:43.098340 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:43 crc kubenswrapper[4872]: I0127 06:55:43.120353 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:14:17.133812138 +0000 UTC Jan 27 06:55:43 crc kubenswrapper[4872]: I0127 06:55:43.120398 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 06:55:43 crc kubenswrapper[4872]: I0127 06:55:43.129529 4872 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 06:55:43 crc kubenswrapper[4872]: I0127 06:55:43.780188 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" event={"ID":"02edc701-07c3-4908-9174-5669a1b0796b","Type":"ContainerStarted","Data":"29dc66fad1a0deec3b23fee5855772c81fb9eff6822d6e4586c8bfb3c0401ebd"} Jan 27 06:55:43 crc kubenswrapper[4872]: I0127 06:55:43.800764 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k9bkq" podStartSLOduration=99.800737676 podStartE2EDuration="1m39.800737676s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:43.799983465 +0000 UTC m=+120.327458661" watchObservedRunningTime="2026-01-27 06:55:43.800737676 +0000 UTC m=+120.328212872" Jan 27 06:55:44 crc kubenswrapper[4872]: E0127 06:55:44.082402 4872 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 06:55:44 crc kubenswrapper[4872]: I0127 06:55:44.103097 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:44 crc kubenswrapper[4872]: I0127 06:55:44.103165 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:44 crc kubenswrapper[4872]: E0127 06:55:44.103932 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:44 crc kubenswrapper[4872]: I0127 06:55:44.103956 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:44 crc kubenswrapper[4872]: E0127 06:55:44.104018 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:44 crc kubenswrapper[4872]: E0127 06:55:44.104111 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:44 crc kubenswrapper[4872]: E0127 06:55:44.218188 4872 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:55:45 crc kubenswrapper[4872]: I0127 06:55:45.097348 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:45 crc kubenswrapper[4872]: E0127 06:55:45.097489 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:46 crc kubenswrapper[4872]: I0127 06:55:46.097385 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:46 crc kubenswrapper[4872]: I0127 06:55:46.097492 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:46 crc kubenswrapper[4872]: E0127 06:55:46.097547 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:46 crc kubenswrapper[4872]: I0127 06:55:46.097583 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:46 crc kubenswrapper[4872]: E0127 06:55:46.097718 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:46 crc kubenswrapper[4872]: E0127 06:55:46.097813 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:47 crc kubenswrapper[4872]: I0127 06:55:47.097200 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:47 crc kubenswrapper[4872]: E0127 06:55:47.097577 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:48 crc kubenswrapper[4872]: I0127 06:55:48.304039 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:48 crc kubenswrapper[4872]: E0127 06:55:48.304167 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:48 crc kubenswrapper[4872]: I0127 06:55:48.304627 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:48 crc kubenswrapper[4872]: E0127 06:55:48.304783 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:48 crc kubenswrapper[4872]: I0127 06:55:48.304809 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:48 crc kubenswrapper[4872]: E0127 06:55:48.305064 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:48 crc kubenswrapper[4872]: I0127 06:55:48.304860 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:48 crc kubenswrapper[4872]: E0127 06:55:48.305289 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:49 crc kubenswrapper[4872]: E0127 06:55:49.219456 4872 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:55:50 crc kubenswrapper[4872]: I0127 06:55:50.098131 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:50 crc kubenswrapper[4872]: I0127 06:55:50.098163 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:50 crc kubenswrapper[4872]: I0127 06:55:50.098154 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:50 crc kubenswrapper[4872]: E0127 06:55:50.098271 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:50 crc kubenswrapper[4872]: I0127 06:55:50.098310 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:50 crc kubenswrapper[4872]: E0127 06:55:50.098373 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:50 crc kubenswrapper[4872]: E0127 06:55:50.098456 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:50 crc kubenswrapper[4872]: E0127 06:55:50.098502 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:52 crc kubenswrapper[4872]: I0127 06:55:52.097551 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:52 crc kubenswrapper[4872]: I0127 06:55:52.097693 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:52 crc kubenswrapper[4872]: E0127 06:55:52.097758 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:52 crc kubenswrapper[4872]: E0127 06:55:52.097688 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:52 crc kubenswrapper[4872]: I0127 06:55:52.097577 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:52 crc kubenswrapper[4872]: E0127 06:55:52.097876 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:52 crc kubenswrapper[4872]: I0127 06:55:52.097557 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:52 crc kubenswrapper[4872]: E0127 06:55:52.097946 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.098161 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.813959 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/3.log" Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.820222 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerStarted","Data":"8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d"} Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.820708 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.846558 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podStartSLOduration=109.846541625 podStartE2EDuration="1m49.846541625s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:55:53.84600349 +0000 UTC m=+130.373478686" watchObservedRunningTime="2026-01-27 06:55:53.846541625 +0000 UTC m=+130.374016821" Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.994516 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nstjz"] Jan 27 06:55:53 crc kubenswrapper[4872]: I0127 06:55:53.994607 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:53 crc kubenswrapper[4872]: E0127 06:55:53.994684 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:54 crc kubenswrapper[4872]: I0127 06:55:54.098014 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:54 crc kubenswrapper[4872]: I0127 06:55:54.098060 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:54 crc kubenswrapper[4872]: I0127 06:55:54.098022 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:54 crc kubenswrapper[4872]: E0127 06:55:54.099046 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:54 crc kubenswrapper[4872]: E0127 06:55:54.099122 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:54 crc kubenswrapper[4872]: E0127 06:55:54.099196 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:54 crc kubenswrapper[4872]: E0127 06:55:54.220280 4872 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:55:55 crc kubenswrapper[4872]: I0127 06:55:55.097827 4872 scope.go:117] "RemoveContainer" containerID="07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20" Jan 27 06:55:55 crc kubenswrapper[4872]: I0127 06:55:55.830790 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/1.log" Jan 27 06:55:55 crc kubenswrapper[4872]: I0127 06:55:55.830862 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerStarted","Data":"9d582de30cd4995904c662993a12d753218a97f46976ef04e2bdb771d2ac83bb"} Jan 27 06:55:56 crc kubenswrapper[4872]: I0127 06:55:56.098216 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:56 crc kubenswrapper[4872]: I0127 06:55:56.098248 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:56 crc kubenswrapper[4872]: I0127 06:55:56.098323 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:56 crc kubenswrapper[4872]: I0127 06:55:56.098323 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:56 crc kubenswrapper[4872]: E0127 06:55:56.098384 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:56 crc kubenswrapper[4872]: E0127 06:55:56.098495 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:56 crc kubenswrapper[4872]: E0127 06:55:56.098583 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:55:56 crc kubenswrapper[4872]: E0127 06:55:56.098664 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:58 crc kubenswrapper[4872]: I0127 06:55:58.097896 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:55:58 crc kubenswrapper[4872]: I0127 06:55:58.097937 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:55:58 crc kubenswrapper[4872]: I0127 06:55:58.097937 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:55:58 crc kubenswrapper[4872]: I0127 06:55:58.098003 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:55:58 crc kubenswrapper[4872]: E0127 06:55:58.098007 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nstjz" podUID="f22e033f-46c7-4d57-a333-e1eee5cd3091" Jan 27 06:55:58 crc kubenswrapper[4872]: E0127 06:55:58.098093 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:55:58 crc kubenswrapper[4872]: E0127 06:55:58.098145 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:55:58 crc kubenswrapper[4872]: E0127 06:55:58.098205 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.098121 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.098271 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.098332 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.098391 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.102597 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.102597 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.102876 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.102976 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.103662 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.103726 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 06:56:00 crc kubenswrapper[4872]: I0127 06:56:00.911723 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.951421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.986499 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.986966 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.990036 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ntnst"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.990430 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.990544 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.991062 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.991550 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jghd5"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.992086 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.992636 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.993165 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.993891 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t9txz"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.994387 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.995111 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-phgkc"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.995457 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.996546 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.996938 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.998234 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tktwd"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.998873 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mmpq8"] Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.998894 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:02 crc kubenswrapper[4872]: I0127 06:56:02.999665 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.002662 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vvsln"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.002960 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6plnf"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.003159 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.003333 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.003631 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.004860 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.005440 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.005957 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.006513 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.006987 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.007745 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.008248 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.013010 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.013783 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.013947 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.014287 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.014329 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.014547 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.015629 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kp6fd"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.016133 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.017650 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rthkh"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.020329 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.020961 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.021233 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.030164 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.030480 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.030772 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.031552 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.031909 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.032224 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.032555 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.032770 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.033188 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.033550 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.034140 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.034301 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.034476 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.034777 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.035540 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.036569 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.038094 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.039343 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.040051 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.040461 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.040479 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.042158 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.043299 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.056757 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.056964 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057167 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057265 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057367 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057460 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057542 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057636 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057631 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057777 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057925 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.057990 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058137 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 06:56:03 crc 
kubenswrapper[4872]: I0127 06:56:03.058258 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058362 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058473 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058592 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058627 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058699 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.058786 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.059012 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.061577 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.062115 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.062150 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.062123 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.062264 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.062905 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063061 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063204 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063330 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063376 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063481 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 06:56:03 crc 
kubenswrapper[4872]: I0127 06:56:03.063502 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063606 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063708 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063753 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063867 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063906 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sslz9"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.063979 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064102 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064168 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064363 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064385 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064525 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064692 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064792 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.064950 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.065683 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068633 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-etcd-client\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068669 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068689 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-machine-approver-tls\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068705 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068720 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a413fe38-46c6-4603-96cc-4667937fe849-node-pullsecrets\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068733 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-etcd-serving-ca\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068747 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-audit\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068761 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-etcd-client\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068782 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-service-ca-bundle\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068795 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5hj\" (UniqueName: \"kubernetes.io/projected/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-kube-api-access-7w5hj\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068809 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-oauth-config\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068825 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a413fe38-46c6-4603-96cc-4667937fe849-audit-dir\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068858 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-auth-proxy-config\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068875 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qplxh\" (UniqueName: 
\"kubernetes.io/projected/a413fe38-46c6-4603-96cc-4667937fe849-kube-api-access-qplxh\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068897 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-serving-cert\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068911 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068939 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswkp\" (UniqueName: \"kubernetes.io/projected/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-kube-api-access-vswkp\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068955 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw9lr\" (UniqueName: \"kubernetes.io/projected/fa6b6759-481a-407c-97a8-d85918a467d7-kube-api-access-kw9lr\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068969 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068985 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-encryption-config\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.068999 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069014 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-image-import-ca\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069027 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-serving-cert\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069042 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa6b6759-481a-407c-97a8-d85918a467d7-serving-cert\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a2fead-ca68-40da-9054-4dd764d24686-audit-dir\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069076 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-console-config\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069090 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-config\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069105 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa6b6759-481a-407c-97a8-d85918a467d7-trusted-ca\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069119 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-service-ca\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069132 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kxc\" (UniqueName: \"kubernetes.io/projected/86a2fead-ca68-40da-9054-4dd764d24686-kube-api-access-69kxc\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc 
kubenswrapper[4872]: I0127 06:56:03.069148 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-serving-cert\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069163 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8rlb\" (UniqueName: \"kubernetes.io/projected/8cfa7f72-9e39-485f-894a-276893a688e1-kube-api-access-x8rlb\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069179 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-config\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069193 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-audit-policies\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069207 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069222 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-trusted-ca-bundle\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069236 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-oauth-serving-cert\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069250 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac19a4e-9870-4f41-8fd6-26126ab86c21-serving-cert\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069265 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-44n64\" (UniqueName: \"kubernetes.io/projected/cac19a4e-9870-4f41-8fd6-26126ab86c21-kube-api-access-44n64\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-encryption-config\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069293 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-config\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069307 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6b6759-481a-407c-97a8-d85918a467d7-config\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.069327 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.070271 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gdn9v"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.070628 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.070740 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.071673 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.075804 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.075945 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.079105 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.079312 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.079439 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.079558 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.080064 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.081625 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.083684 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.086560 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.088052 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.088188 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.088336 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.088554 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.088702 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.089096 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.089919 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.091710 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.091946 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.092315 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 
06:56:03.094472 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.094704 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.094958 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.095339 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.095883 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.097020 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ntnst"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.117750 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.120367 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jghd5"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.122371 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.127617 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ml5m6"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.133962 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.140375 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.140964 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.141623 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.141711 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.144143 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.144596 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.146522 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.147077 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.146671 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.151258 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-62544"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.151891 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.153870 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.154609 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.155320 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.155906 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.157231 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.157695 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.158787 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.159306 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lxtl"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.159706 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.162760 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.163191 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169364 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169755 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169821 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169863 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/687ef6ff-0cea-474f-892a-fdca8d9e386e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169912 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-etcd-client\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169939 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-ca\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169960 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169985 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170002 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-machine-approver-tls\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170019 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170040 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r87mn\" (UniqueName: \"kubernetes.io/projected/d1853449-b48b-49ed-aeee-4c0dce155450-kube-api-access-r87mn\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170057 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a413fe38-46c6-4603-96cc-4667937fe849-node-pullsecrets\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170069 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170075 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-etcd-serving-ca\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170322 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1853449-b48b-49ed-aeee-4c0dce155450-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170344 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170366 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8w2s\" (UniqueName: \"kubernetes.io/projected/687ef6ff-0cea-474f-892a-fdca8d9e386e-kube-api-access-k8w2s\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170396 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-audit\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170448 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-etcd-client\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170470 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-service-ca\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170492 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-dir\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170521 
4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-service-ca-bundle\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170540 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-config\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170558 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a413fe38-46c6-4603-96cc-4667937fe849-audit-dir\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170577 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w5hj\" (UniqueName: \"kubernetes.io/projected/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-kube-api-access-7w5hj\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170596 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-oauth-config\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170618 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-auth-proxy-config\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170640 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-client\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170662 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-etcd-serving-ca\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170669 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qplxh\" (UniqueName: 
\"kubernetes.io/projected/a413fe38-46c6-4603-96cc-4667937fe849-kube-api-access-qplxh\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170691 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-serving-cert\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170709 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170728 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswkp\" (UniqueName: \"kubernetes.io/projected/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-kube-api-access-vswkp\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170744 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-service-ca-bundle\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170761 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gbj\" (UniqueName: \"kubernetes.io/projected/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-kube-api-access-f4gbj\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170783 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw9lr\" (UniqueName: \"kubernetes.io/projected/fa6b6759-481a-407c-97a8-d85918a467d7-kube-api-access-kw9lr\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170801 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170818 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1853449-b48b-49ed-aeee-4c0dce155450-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170835 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8473f532-cab1-4b2b-8402-f4eec9d94bd2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p6zmz\" (UID: \"8473f532-cab1-4b2b-8402-f4eec9d94bd2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170875 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170893 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-encryption-config\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170912 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170934 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-image-import-ca\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170971 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-serving-cert\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.170988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171005 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9qgd\" (UniqueName: \"kubernetes.io/projected/fecd0f15-a29c-4508-af28-9169b2cf96b7-kube-api-access-t9qgd\") pod \"downloads-7954f5f757-6plnf\" (UID: \"fecd0f15-a29c-4508-af28-9169b2cf96b7\") " pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171032 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa6b6759-481a-407c-97a8-d85918a467d7-serving-cert\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171050 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652122b5-378f-48ab-9d77-d30cda97a77d-serving-cert\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171068 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a2fead-ca68-40da-9054-4dd764d24686-audit-dir\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171085 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-console-config\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171101 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-default-certificate\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171119 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-metrics-certs\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171136 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln7f\" (UniqueName: \"kubernetes.io/projected/8473f532-cab1-4b2b-8402-f4eec9d94bd2-kube-api-access-zln7f\") pod \"cluster-samples-operator-665b6dd947-p6zmz\" (UID: \"8473f532-cab1-4b2b-8402-f4eec9d94bd2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 
06:56:03.171153 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-config\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171171 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171193 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-client-ca\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171210 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171227 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa6b6759-481a-407c-97a8-d85918a467d7-trusted-ca\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171244 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc7ts\" (UniqueName: \"kubernetes.io/projected/a31dcc31-ff38-40f9-b26d-fb3757f651c5-kube-api-access-tc7ts\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171265 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-service-ca\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171282 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxbn\" (UniqueName: \"kubernetes.io/projected/34692f41-ceea-4bcf-a05d-8b0ceca661df-kube-api-access-ndxbn\") pod \"dns-operator-744455d44c-kp6fd\" (UID: \"34692f41-ceea-4bcf-a05d-8b0ceca661df\") " pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171297 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171312 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171328 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmjqp\" (UniqueName: \"kubernetes.io/projected/2ec6eaa6-0516-4dad-8481-ef8bf49935de-kube-api-access-rmjqp\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171348 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69kxc\" (UniqueName: \"kubernetes.io/projected/86a2fead-ca68-40da-9054-4dd764d24686-kube-api-access-69kxc\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171363 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34692f41-ceea-4bcf-a05d-8b0ceca661df-metrics-tls\") pod \"dns-operator-744455d44c-kp6fd\" (UID: \"34692f41-ceea-4bcf-a05d-8b0ceca661df\") " pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171380 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42zm\" (UniqueName: \"kubernetes.io/projected/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-kube-api-access-l42zm\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171398 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171414 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-stats-auth\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171433 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-serving-cert\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171448 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-policies\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171464 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171484 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-audit-policies\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171501 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8rlb\" (UniqueName: \"kubernetes.io/projected/8cfa7f72-9e39-485f-894a-276893a688e1-kube-api-access-x8rlb\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171516 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-config\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171532 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-config\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171548 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171566 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-encryption-config\") pod \"apiserver-76f77b778f-jghd5\" (UID: 
\"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171582 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171597 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-trusted-ca-bundle\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171613 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-oauth-serving-cert\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171630 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac19a4e-9870-4f41-8fd6-26126ab86c21-serving-cert\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171646 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44n64\" (UniqueName: \"kubernetes.io/projected/cac19a4e-9870-4f41-8fd6-26126ab86c21-kube-api-access-44n64\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171662 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn59l\" (UniqueName: \"kubernetes.io/projected/652122b5-378f-48ab-9d77-d30cda97a77d-kube-api-access-dn59l\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171679 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687ef6ff-0cea-474f-892a-fdca8d9e386e-serving-cert\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171695 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec6eaa6-0516-4dad-8481-ef8bf49935de-serving-cert\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 
06:56:03.171716 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-config\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171742 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6b6759-481a-407c-97a8-d85918a467d7-config\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.171839 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.169779 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.172572 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6b6759-481a-407c-97a8-d85918a467d7-config\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.173046 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-config\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.173506 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-console-config\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.173965 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-audit-policies\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.174244 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-audit\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.174797 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-service-ca\") pod \"console-f9d7485db-ntnst\" (UID: 
\"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.175487 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-service-ca-bundle\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.175559 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a413fe38-46c6-4603-96cc-4667937fe849-audit-dir\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.179240 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.180334 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-machine-approver-tls\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.181488 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.181946 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.182348 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.183176 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.183546 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa6b6759-481a-407c-97a8-d85918a467d7-trusted-ca\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.184428 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-oauth-serving-cert\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.184912 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-config\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.185458 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.185687 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.189238 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nk52m"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.189665 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tktwd"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.189753 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.189964 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.193979 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.194068 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sslz9"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.194139 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t9txz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.197150 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.197666 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vvsln"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.197757 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6plnf"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.197678 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac19a4e-9870-4f41-8fd6-26126ab86c21-serving-cert\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.198150 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.198353 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-config\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.199352 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.199517 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-serving-cert\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.200760 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a413fe38-46c6-4603-96cc-4667937fe849-image-import-ca\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.203986 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-auth-proxy-config\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.205915 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-etcd-client\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.206073 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a2fead-ca68-40da-9054-4dd764d24686-audit-dir\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.206309 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cac19a4e-9870-4f41-8fd6-26126ab86c21-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.206652 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.206736 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.207127 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.207528 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86a2fead-ca68-40da-9054-4dd764d24686-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.207562 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a413fe38-46c6-4603-96cc-4667937fe849-node-pullsecrets\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.209213 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-trusted-ca-bundle\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.210358 4872 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns/dns-default-n6d8q"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.210924 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6k8gx"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.211232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.211696 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.211796 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.212053 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.212150 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.212646 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.213448 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.214201 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa6b6759-481a-407c-97a8-d85918a467d7-serving-cert\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.216971 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.232041 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-encryption-config\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.232393 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-encryption-config\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.232636 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-etcd-client\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.233249 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/86a2fead-ca68-40da-9054-4dd764d24686-serving-cert\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.233282 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.233543 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a413fe38-46c6-4603-96cc-4667937fe849-serving-cert\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.233776 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-oauth-config\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.235524 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.239122 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-62544"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.244648 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.248931 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kp6fd"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.250746 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.252227 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.254443 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.256682 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.263225 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mmpq8"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.264929 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.268425 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nqgbm"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.268751 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.269184 4872 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.270294 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rthkh"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.271754 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vlcpq"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272347 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272381 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/687ef6ff-0cea-474f-892a-fdca8d9e386e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272418 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-ca\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272441 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272473 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87mn\" (UniqueName: \"kubernetes.io/projected/d1853449-b48b-49ed-aeee-4c0dce155450-kube-api-access-r87mn\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272496 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1853449-b48b-49ed-aeee-4c0dce155450-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272519 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272536 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8w2s\" (UniqueName: \"kubernetes.io/projected/687ef6ff-0cea-474f-892a-fdca8d9e386e-kube-api-access-k8w2s\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272553 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-service-ca\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272571 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-dir\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272586 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-config\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272613 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-client\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272645 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-service-ca-bundle\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272666 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gbj\" (UniqueName: \"kubernetes.io/projected/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-kube-api-access-f4gbj\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272693 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1853449-b48b-49ed-aeee-4c0dce155450-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272708 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8473f532-cab1-4b2b-8402-f4eec9d94bd2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p6zmz\" (UID: \"8473f532-cab1-4b2b-8402-f4eec9d94bd2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272722 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272781 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272799 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272815 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9qgd\" (UniqueName: \"kubernetes.io/projected/fecd0f15-a29c-4508-af28-9169b2cf96b7-kube-api-access-t9qgd\") pod \"downloads-7954f5f757-6plnf\" (UID: \"fecd0f15-a29c-4508-af28-9169b2cf96b7\") " pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272853 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652122b5-378f-48ab-9d77-d30cda97a77d-serving-cert\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272870 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-default-certificate\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272884 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-metrics-certs\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272898 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln7f\" (UniqueName: 
\"kubernetes.io/projected/8473f532-cab1-4b2b-8402-f4eec9d94bd2-kube-api-access-zln7f\") pod \"cluster-samples-operator-665b6dd947-p6zmz\" (UID: \"8473f532-cab1-4b2b-8402-f4eec9d94bd2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272913 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272929 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-client-ca\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272945 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272961 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc7ts\" (UniqueName: \"kubernetes.io/projected/a31dcc31-ff38-40f9-b26d-fb3757f651c5-kube-api-access-tc7ts\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272978 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndxbn\" (UniqueName: \"kubernetes.io/projected/34692f41-ceea-4bcf-a05d-8b0ceca661df-kube-api-access-ndxbn\") pod \"dns-operator-744455d44c-kp6fd\" (UID: \"34692f41-ceea-4bcf-a05d-8b0ceca661df\") " pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.272993 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273011 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273027 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmjqp\" (UniqueName: 
\"kubernetes.io/projected/2ec6eaa6-0516-4dad-8481-ef8bf49935de-kube-api-access-rmjqp\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273049 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34692f41-ceea-4bcf-a05d-8b0ceca661df-metrics-tls\") pod \"dns-operator-744455d44c-kp6fd\" (UID: \"34692f41-ceea-4bcf-a05d-8b0ceca661df\") " pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273065 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l42zm\" (UniqueName: \"kubernetes.io/projected/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-kube-api-access-l42zm\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273083 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273093 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/687ef6ff-0cea-474f-892a-fdca8d9e386e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273101 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-stats-auth\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273183 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-policies\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273212 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273252 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-config\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273296 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273335 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn59l\" (UniqueName: \"kubernetes.io/projected/652122b5-378f-48ab-9d77-d30cda97a77d-kube-api-access-dn59l\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273359 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec6eaa6-0516-4dad-8481-ef8bf49935de-serving-cert\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273552 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687ef6ff-0cea-474f-892a-fdca8d9e386e-serving-cert\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273996 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-policies\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.277086 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.277265 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.273417 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.279400 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-dir\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.279879 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6k8gx"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.279945 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-client-ca\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.280255 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.280477 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.280634 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687ef6ff-0cea-474f-892a-fdca8d9e386e-serving-cert\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.281210 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1853449-b48b-49ed-aeee-4c0dce155450-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.282935 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.283400 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-config\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.283516 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.283669 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1853449-b48b-49ed-aeee-4c0dce155450-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.283924 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.284289 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.284290 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.284357 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec6eaa6-0516-4dad-8481-ef8bf49935de-serving-cert\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.284569 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.284913 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7"] Jan 27 
06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.286030 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.286029 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.286157 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34692f41-ceea-4bcf-a05d-8b0ceca661df-metrics-tls\") pod \"dns-operator-744455d44c-kp6fd\" (UID: \"34692f41-ceea-4bcf-a05d-8b0ceca661df\") " pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.286749 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.288250 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8473f532-cab1-4b2b-8402-f4eec9d94bd2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p6zmz\" (UID: \"8473f532-cab1-4b2b-8402-f4eec9d94bd2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.288451 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-phgkc"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.289546 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.289833 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.294022 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.294130 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n6d8q"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.294210 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lxtl"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 
06:56:03.296896 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ml5m6"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.296954 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.297943 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-config\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.300106 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.300163 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nqgbm"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.300395 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nk52m"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.309518 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.311386 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz"] Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.314418 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.314756 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/652122b5-378f-48ab-9d77-d30cda97a77d-serving-cert\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.328535 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.334508 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-client\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.349202 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.350722 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-service-ca\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.369353 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.373910 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/652122b5-378f-48ab-9d77-d30cda97a77d-etcd-ca\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.388928 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.408904 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.429347 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.442337 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-metrics-certs\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.457325 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.469610 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.481855 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-default-certificate\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.489334 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.496751 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-stats-auth\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.509020 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.528832 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.550107 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.571014 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.588951 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.593475 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-service-ca-bundle\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.609296 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.629580 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.648724 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.689230 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.709612 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.729875 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.749763 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.769406 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.789779 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.809013 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.829244 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.849421 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.869694 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.888590 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.908479 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.929112 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.948527 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.968969 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:56:03 crc kubenswrapper[4872]: I0127 06:56:03.989570 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.009218 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.028883 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.049484 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.068598 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.089124 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.109382 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.129078 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.148818 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.168031 4872 request.go:700] Waited for 1.010066872s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.169932 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.189591 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.209277 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 06:56:04 crc 
kubenswrapper[4872]: I0127 06:56:04.228655 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.249061 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.269000 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.296893 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.309265 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.329136 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.349569 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.369370 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.389080 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.409426 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.430465 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.450302 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.483528 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69kxc\" (UniqueName: \"kubernetes.io/projected/86a2fead-ca68-40da-9054-4dd764d24686-kube-api-access-69kxc\") pod \"apiserver-7bbb656c7d-8pnkr\" (UID: \"86a2fead-ca68-40da-9054-4dd764d24686\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.505667 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w5hj\" (UniqueName: \"kubernetes.io/projected/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-kube-api-access-7w5hj\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.508977 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.543409 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8rlb\" (UniqueName: 
\"kubernetes.io/projected/8cfa7f72-9e39-485f-894a-276893a688e1-kube-api-access-x8rlb\") pod \"console-f9d7485db-ntnst\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.549509 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.569504 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.590389 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.608925 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.619533 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.629134 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.649932 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.663139 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.669659 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.689653 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.729127 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44n64\" (UniqueName: \"kubernetes.io/projected/cac19a4e-9870-4f41-8fd6-26126ab86c21-kube-api-access-44n64\") pod \"authentication-operator-69f744f599-t9txz\" (UID: \"cac19a4e-9870-4f41-8fd6-26126ab86c21\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.730425 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.772656 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw9lr\" (UniqueName: \"kubernetes.io/projected/fa6b6759-481a-407c-97a8-d85918a467d7-kube-api-access-kw9lr\") pod \"console-operator-58897d9998-phgkc\" (UID: \"fa6b6759-481a-407c-97a8-d85918a467d7\") " pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.786081 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qplxh\" (UniqueName: \"kubernetes.io/projected/a413fe38-46c6-4603-96cc-4667937fe849-kube-api-access-qplxh\") pod \"apiserver-76f77b778f-jghd5\" (UID: \"a413fe38-46c6-4603-96cc-4667937fe849\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.804331 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswkp\" (UniqueName: \"kubernetes.io/projected/2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a-kube-api-access-vswkp\") pod \"machine-approver-56656f9798-8xlp4\" (UID: \"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.813503 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.835111 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.839678 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-27hjj\" (UID: \"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.849463 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.855019 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" event={"ID":"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a","Type":"ContainerStarted","Data":"3bea9e714e2be51920f4c9bdf62bc7ff1d06625122f43c26dc52c4d10f882371"} Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.858661 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.863772 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr"] Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.869021 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.889961 4872 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.909709 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.930163 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.936055 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.951052 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.970113 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.983823 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.991461 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 06:56:04 crc kubenswrapper[4872]: I0127 06:56:04.993892 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ntnst"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.009485 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.029386 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.050824 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.051497 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.069765 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.090307 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.111141 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.131960 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.159147 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jghd5"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.168689 4872 request.go:700] Waited for 1.892124135s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.173697 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn59l\" (UniqueName: \"kubernetes.io/projected/652122b5-378f-48ab-9d77-d30cda97a77d-kube-api-access-dn59l\") pod \"etcd-operator-b45778765-sslz9\" (UID: \"652122b5-378f-48ab-9d77-d30cda97a77d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.188028 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.217431 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.220290 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9qgd\" (UniqueName: \"kubernetes.io/projected/fecd0f15-a29c-4508-af28-9169b2cf96b7-kube-api-access-t9qgd\") pod \"downloads-7954f5f757-6plnf\" (UID: \"fecd0f15-a29c-4508-af28-9169b2cf96b7\") " pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.221112 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87mn\" (UniqueName: \"kubernetes.io/projected/d1853449-b48b-49ed-aeee-4c0dce155450-kube-api-access-r87mn\") pod \"openshift-apiserver-operator-796bbdcf4f-bw4wb\" (UID: \"d1853449-b48b-49ed-aeee-4c0dce155450\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.237688 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.248550 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.291701 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmjqp\" (UniqueName: \"kubernetes.io/projected/2ec6eaa6-0516-4dad-8481-ef8bf49935de-kube-api-access-rmjqp\") pod \"route-controller-manager-6576b87f9c-vs2xs\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.305718 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndxbn\" (UniqueName: \"kubernetes.io/projected/34692f41-ceea-4bcf-a05d-8b0ceca661df-kube-api-access-ndxbn\") pod \"dns-operator-744455d44c-kp6fd\" (UID: \"34692f41-ceea-4bcf-a05d-8b0ceca661df\") " pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.327051 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc7ts\" (UniqueName: \"kubernetes.io/projected/a31dcc31-ff38-40f9-b26d-fb3757f651c5-kube-api-access-tc7ts\") pod \"oauth-openshift-558db77b4-vvsln\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.353187 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln7f\" (UniqueName: \"kubernetes.io/projected/8473f532-cab1-4b2b-8402-f4eec9d94bd2-kube-api-access-zln7f\") pod \"cluster-samples-operator-665b6dd947-p6zmz\" (UID: \"8473f532-cab1-4b2b-8402-f4eec9d94bd2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.358624 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-phgkc"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.365047 4872 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-l42zm\" (UniqueName: \"kubernetes.io/projected/fe5c29a5-1e27-4a9c-8050-ee9c10d2d595-kube-api-access-l42zm\") pod \"router-default-5444994796-gdn9v\" (UID: \"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595\") " pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.368349 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.387464 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8w2s\" (UniqueName: \"kubernetes.io/projected/687ef6ff-0cea-474f-892a-fdca8d9e386e-kube-api-access-k8w2s\") pod \"openshift-config-operator-7777fb866f-fnkf9\" (UID: \"687ef6ff-0cea-474f-892a-fdca8d9e386e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.413574 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gbj\" (UniqueName: \"kubernetes.io/projected/6d4456cc-8ca4-460c-a793-4c16fa6cbc07-kube-api-access-f4gbj\") pod \"openshift-controller-manager-operator-756b6f6bc6-q5xfh\" (UID: \"6d4456cc-8ca4-460c-a793-4c16fa6cbc07\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:05 crc kubenswrapper[4872]: W0127 06:56:05.419420 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa6b6759_481a_407c_97a8_d85918a467d7.slice/crio-cf2881e95aec9776f51dc70fc644a711afd2a4c9fece7db99729905b9a844a5d WatchSource:0}: Error finding container cf2881e95aec9776f51dc70fc644a711afd2a4c9fece7db99729905b9a844a5d: Status 404 returned error can't find the container with id cf2881e95aec9776f51dc70fc644a711afd2a4c9fece7db99729905b9a844a5d Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.425280 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.432013 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.442232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.447268 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.447441 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sslz9"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.456072 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.464378 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.469405 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.482424 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t9txz"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.492241 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501051 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c17752d4-ad64-445d-882f-134f79928b40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501150 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-trusted-ca\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501173 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c928c3-3313-4bb8-afe7-9438975f6f51-trusted-ca\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501218 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579a88b-344f-4198-8070-bc7a7b73bbaf-config\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501307 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d579a88b-344f-4198-8070-bc7a7b73bbaf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501331 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501762 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-bound-sa-token\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc 
kubenswrapper[4872]: I0127 06:56:05.501801 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2h8t\" (UniqueName: \"kubernetes.io/projected/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-kube-api-access-h2h8t\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501817 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87c928c3-3313-4bb8-afe7-9438975f6f51-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.501982 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502142 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-client-ca\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502295 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-serving-cert\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502361 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-registry-certificates\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502386 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-config\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502407 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c17752d4-ad64-445d-882f-134f79928b40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.502469 4872 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.002456536 +0000 UTC m=+142.529931732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502752 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-registry-tls\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502803 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbtg7\" (UniqueName: \"kubernetes.io/projected/87c928c3-3313-4bb8-afe7-9438975f6f51-kube-api-access-zbtg7\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502890 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d579a88b-344f-4198-8070-bc7a7b73bbaf-images\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502908 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj4rj\" (UniqueName: \"kubernetes.io/projected/d579a88b-344f-4198-8070-bc7a7b73bbaf-kube-api-access-nj4rj\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502936 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmdk\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-kube-api-access-btmdk\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.502968 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87c928c3-3313-4bb8-afe7-9438975f6f51-metrics-tls\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: W0127 06:56:05.504579 4872 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcac19a4e_9870_4f41_8fd6_26126ab86c21.slice/crio-2ac835c8e657a851d85dd93d210848bf4109a80df18ecf65aeed0c6d593e1c70 WatchSource:0}: Error finding container 2ac835c8e657a851d85dd93d210848bf4109a80df18ecf65aeed0c6d593e1c70: Status 404 returned error can't find the container with id 2ac835c8e657a851d85dd93d210848bf4109a80df18ecf65aeed0c6d593e1c70 Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.603567 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.604246 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.104225856 +0000 UTC m=+142.631701052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604580 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374ad670-12b0-4842-a6ca-1cbf355b5a99-config-volume\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604630 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa0d3672-506e-417a-8528-ba9e7ea8a2ba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dx5vk\" (UID: \"aa0d3672-506e-417a-8528-ba9e7ea8a2ba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604661 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87c928c3-3313-4bb8-afe7-9438975f6f51-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604724 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfedc4d3-7528-4088-ad69-171d2ec1ba14-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: 
I0127 06:56:05.604770 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsgn\" (UniqueName: \"kubernetes.io/projected/4b124376-3a32-4edb-b447-70fd2bd56e47-kube-api-access-wdsgn\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604792 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db57d95-3aeb-4d52-81fe-d87a474adb7b-config\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604822 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-config\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604892 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xs6x\" (UniqueName: \"kubernetes.io/projected/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-kube-api-access-6xs6x\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604914 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88d5\" (UniqueName: \"kubernetes.io/projected/aa0d3672-506e-417a-8528-ba9e7ea8a2ba-kube-api-access-f88d5\") pod \"control-plane-machine-set-operator-78cbb6b69f-dx5vk\" (UID: \"aa0d3672-506e-417a-8528-ba9e7ea8a2ba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604975 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhc9\" (UniqueName: \"kubernetes.io/projected/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-kube-api-access-tlhc9\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.604992 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3bd541e-8647-49ce-be1c-85f6e5c90f86-metrics-tls\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605008 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-registry-tls\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605041 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/374ad670-12b0-4842-a6ca-1cbf355b5a99-secret-volume\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605075 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65eec553-40f9-421b-bc5b-fd94cfdc3eee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605110 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5d73faa6-98ab-4066-bf36-1f4a7609fd92-signing-cabundle\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605204 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqh77\" (UniqueName: \"kubernetes.io/projected/031efb47-2cb0-4323-896b-67cce30690ae-kube-api-access-jqh77\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605230 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8587651-0242-4cd8-b60d-1551b4908dfe-proxy-tls\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605277 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db57d95-3aeb-4d52-81fe-d87a474adb7b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605299 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxzj\" (UniqueName: \"kubernetes.io/projected/7b17ad7a-bf74-4ffd-ae08-f02802fdaa22-kube-api-access-tzxzj\") pod \"ingress-canary-nqgbm\" (UID: \"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22\") " pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605377 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-images\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605400 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9chd\" (UniqueName: \"kubernetes.io/projected/3f173d12-704f-4431-aa56-05841c81d146-kube-api-access-s9chd\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605467 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btmdk\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-kube-api-access-btmdk\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605555 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvwkq\" (UniqueName: \"kubernetes.io/projected/5d73faa6-98ab-4066-bf36-1f4a7609fd92-kube-api-access-wvwkq\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605595 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36c9a520-9452-469d-b65e-635f7ea74105-srv-cert\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605640 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6221868d-3674-4d0f-9796-29338e188d50-apiservice-cert\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605679 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-trusted-ca\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605748 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvph\" (UniqueName: \"kubernetes.io/projected/a8587651-0242-4cd8-b60d-1551b4908dfe-kube-api-access-vrvph\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605769 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-mountpoint-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605784 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/031efb47-2cb0-4323-896b-67cce30690ae-certs\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605798 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7xf\" (UniqueName: \"kubernetes.io/projected/d3bd541e-8647-49ce-be1c-85f6e5c90f86-kube-api-access-cv7xf\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605829 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36c9a520-9452-469d-b65e-635f7ea74105-profile-collector-cert\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605872 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579a88b-344f-4198-8070-bc7a7b73bbaf-config\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605899 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4bb\" (UniqueName: \"kubernetes.io/projected/def7d421-8c6f-4694-96ed-ac9beaeed3f6-kube-api-access-wr4bb\") pod \"migrator-59844c95c7-v6d6v\" (UID: \"def7d421-8c6f-4694-96ed-ac9beaeed3f6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605960 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d579a88b-344f-4198-8070-bc7a7b73bbaf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.605987 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65eec553-40f9-421b-bc5b-fd94cfdc3eee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606021 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-bound-sa-token\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606037 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-csi-data-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: 
\"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606054 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8p6s\" (UniqueName: \"kubernetes.io/projected/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-kube-api-access-f8p6s\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606071 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9db57d95-3aeb-4d52-81fe-d87a474adb7b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606118 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606135 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-client-ca\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606151 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2h8t\" (UniqueName: \"kubernetes.io/projected/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-kube-api-access-h2h8t\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606184 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-config\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606218 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-plugins-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606260 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhgv\" (UniqueName: \"kubernetes.io/projected/cfedc4d3-7528-4088-ad69-171d2ec1ba14-kube-api-access-qbhgv\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: 
\"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606280 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3bd541e-8647-49ce-be1c-85f6e5c90f86-config-volume\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606303 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4fb\" (UniqueName: \"kubernetes.io/projected/6221868d-3674-4d0f-9796-29338e188d50-kube-api-access-2z4fb\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606350 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-serving-cert\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606365 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-proxy-tls\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606465 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-registry-certificates\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606759 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-auth-proxy-config\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.606786 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmpr\" (UniqueName: \"kubernetes.io/projected/36c9a520-9452-469d-b65e-635f7ea74105-kube-api-access-9hmpr\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.611746 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c17752d4-ad64-445d-882f-134f79928b40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.611861 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4b124376-3a32-4edb-b447-70fd2bd56e47-srv-cert\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.611916 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2d5\" (UniqueName: \"kubernetes.io/projected/374ad670-12b0-4842-a6ca-1cbf355b5a99-kube-api-access-qw2d5\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.611944 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-registration-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.611975 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fc345c8-36d3-43cb-a15f-1c38b189047d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612136 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65eec553-40f9-421b-bc5b-fd94cfdc3eee-config\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612220 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fc345c8-36d3-43cb-a15f-1c38b189047d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612270 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nch\" (UniqueName: \"kubernetes.io/projected/2ec25a43-b046-4e04-ab5c-a9cc62862bf9-kube-api-access-j9nch\") pod \"package-server-manager-789f6589d5-xpbg9\" (UID: \"2ec25a43-b046-4e04-ab5c-a9cc62862bf9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612381 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbtg7\" (UniqueName: \"kubernetes.io/projected/87c928c3-3313-4bb8-afe7-9438975f6f51-kube-api-access-zbtg7\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: 
\"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612414 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f173d12-704f-4431-aa56-05841c81d146-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612475 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8587651-0242-4cd8-b60d-1551b4908dfe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612552 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d579a88b-344f-4198-8070-bc7a7b73bbaf-images\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612589 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj4rj\" (UniqueName: \"kubernetes.io/projected/d579a88b-344f-4198-8070-bc7a7b73bbaf-kube-api-access-nj4rj\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612614 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5d73faa6-98ab-4066-bf36-1f4a7609fd92-signing-key\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612634 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-serving-cert\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.612700 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87c928c3-3313-4bb8-afe7-9438975f6f51-metrics-tls\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613097 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/031efb47-2cb0-4323-896b-67cce30690ae-node-bootstrap-token\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " 
pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613196 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfedc4d3-7528-4088-ad69-171d2ec1ba14-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613264 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6221868d-3674-4d0f-9796-29338e188d50-webhook-cert\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613292 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-socket-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613316 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b17ad7a-bf74-4ffd-ae08-f02802fdaa22-cert\") pod \"ingress-canary-nqgbm\" (UID: \"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22\") " pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613370 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c17752d4-ad64-445d-882f-134f79928b40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.613404 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd6cf\" (UniqueName: \"kubernetes.io/projected/be6179fb-ca66-4f99-9ddc-cd50f0952a1d-kube-api-access-pd6cf\") pod \"multus-admission-controller-857f4d67dd-ml5m6\" (UID: \"be6179fb-ca66-4f99-9ddc-cd50f0952a1d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.614271 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-config\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.614276 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c928c3-3313-4bb8-afe7-9438975f6f51-trusted-ca\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.614342 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4b124376-3a32-4edb-b447-70fd2bd56e47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.614368 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f173d12-704f-4431-aa56-05841c81d146-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.617548 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec25a43-b046-4e04-ab5c-a9cc62862bf9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpbg9\" (UID: \"2ec25a43-b046-4e04-ab5c-a9cc62862bf9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.618089 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.618120 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6221868d-3674-4d0f-9796-29338e188d50-tmpfs\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.618216 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fc345c8-36d3-43cb-a15f-1c38b189047d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.618237 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be6179fb-ca66-4f99-9ddc-cd50f0952a1d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ml5m6\" (UID: \"be6179fb-ca66-4f99-9ddc-cd50f0952a1d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.618741 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c17752d4-ad64-445d-882f-134f79928b40-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: 
I0127 06:56:05.622356 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d579a88b-344f-4198-8070-bc7a7b73bbaf-images\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.628559 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-registry-tls\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.629565 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-trusted-ca\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.636780 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87c928c3-3313-4bb8-afe7-9438975f6f51-metrics-tls\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.638591 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87c928c3-3313-4bb8-afe7-9438975f6f51-trusted-ca\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.644456 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d579a88b-344f-4198-8070-bc7a7b73bbaf-config\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.657481 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-registry-certificates\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.660524 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.160504033 +0000 UTC m=+142.687979239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.684989 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c17752d4-ad64-445d-882f-134f79928b40-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.685364 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-client-ca\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.687484 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.688878 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d579a88b-344f-4198-8070-bc7a7b73bbaf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.689408 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-serving-cert\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.702152 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs"] Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.719708 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.720207 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be6179fb-ca66-4f99-9ddc-cd50f0952a1d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ml5m6\" (UID: \"be6179fb-ca66-4f99-9ddc-cd50f0952a1d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 
27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.720574 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.220552437 +0000 UTC m=+142.748027643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.721756 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6221868d-3674-4d0f-9796-29338e188d50-tmpfs\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.721829 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fc345c8-36d3-43cb-a15f-1c38b189047d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.721932 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374ad670-12b0-4842-a6ca-1cbf355b5a99-config-volume\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.721977 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa0d3672-506e-417a-8528-ba9e7ea8a2ba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dx5vk\" (UID: \"aa0d3672-506e-417a-8528-ba9e7ea8a2ba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722261 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfedc4d3-7528-4088-ad69-171d2ec1ba14-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722428 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsgn\" (UniqueName: \"kubernetes.io/projected/4b124376-3a32-4edb-b447-70fd2bd56e47-kube-api-access-wdsgn\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc 
kubenswrapper[4872]: I0127 06:56:05.722605 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db57d95-3aeb-4d52-81fe-d87a474adb7b-config\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722635 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xs6x\" (UniqueName: \"kubernetes.io/projected/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-kube-api-access-6xs6x\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722692 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88d5\" (UniqueName: \"kubernetes.io/projected/aa0d3672-506e-417a-8528-ba9e7ea8a2ba-kube-api-access-f88d5\") pod \"control-plane-machine-set-operator-78cbb6b69f-dx5vk\" (UID: \"aa0d3672-506e-417a-8528-ba9e7ea8a2ba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722737 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlhc9\" (UniqueName: \"kubernetes.io/projected/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-kube-api-access-tlhc9\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722789 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3bd541e-8647-49ce-be1c-85f6e5c90f86-metrics-tls\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722809 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/374ad670-12b0-4842-a6ca-1cbf355b5a99-secret-volume\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722855 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65eec553-40f9-421b-bc5b-fd94cfdc3eee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722874 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5d73faa6-98ab-4066-bf36-1f4a7609fd92-signing-cabundle\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722899 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqh77\" (UniqueName: 
\"kubernetes.io/projected/031efb47-2cb0-4323-896b-67cce30690ae-kube-api-access-jqh77\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722952 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8587651-0242-4cd8-b60d-1551b4908dfe-proxy-tls\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.722976 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db57d95-3aeb-4d52-81fe-d87a474adb7b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723020 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxzj\" (UniqueName: \"kubernetes.io/projected/7b17ad7a-bf74-4ffd-ae08-f02802fdaa22-kube-api-access-tzxzj\") pod \"ingress-canary-nqgbm\" (UID: \"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22\") " pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723055 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9chd\" (UniqueName: \"kubernetes.io/projected/3f173d12-704f-4431-aa56-05841c81d146-kube-api-access-s9chd\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723097 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-images\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723136 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvwkq\" (UniqueName: \"kubernetes.io/projected/5d73faa6-98ab-4066-bf36-1f4a7609fd92-kube-api-access-wvwkq\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723170 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36c9a520-9452-469d-b65e-635f7ea74105-srv-cert\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723187 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6221868d-3674-4d0f-9796-29338e188d50-apiservice-cert\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723349 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv7xf\" (UniqueName: \"kubernetes.io/projected/d3bd541e-8647-49ce-be1c-85f6e5c90f86-kube-api-access-cv7xf\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723378 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36c9a520-9452-469d-b65e-635f7ea74105-profile-collector-cert\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723397 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrvph\" (UniqueName: \"kubernetes.io/projected/a8587651-0242-4cd8-b60d-1551b4908dfe-kube-api-access-vrvph\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723441 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-mountpoint-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723456 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/031efb47-2cb0-4323-896b-67cce30690ae-certs\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723480 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4bb\" (UniqueName: \"kubernetes.io/projected/def7d421-8c6f-4694-96ed-ac9beaeed3f6-kube-api-access-wr4bb\") pod \"migrator-59844c95c7-v6d6v\" (UID: \"def7d421-8c6f-4694-96ed-ac9beaeed3f6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.723964 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65eec553-40f9-421b-bc5b-fd94cfdc3eee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724182 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-csi-data-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724212 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f8p6s\" (UniqueName: \"kubernetes.io/projected/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-kube-api-access-f8p6s\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724235 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9db57d95-3aeb-4d52-81fe-d87a474adb7b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724254 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-config\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724283 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724311 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhgv\" (UniqueName: \"kubernetes.io/projected/cfedc4d3-7528-4088-ad69-171d2ec1ba14-kube-api-access-qbhgv\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724330 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3bd541e-8647-49ce-be1c-85f6e5c90f86-config-volume\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724350 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-plugins-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724371 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z4fb\" (UniqueName: \"kubernetes.io/projected/6221868d-3674-4d0f-9796-29338e188d50-kube-api-access-2z4fb\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724392 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-proxy-tls\") pod 
\"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724410 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-auth-proxy-config\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724439 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmpr\" (UniqueName: \"kubernetes.io/projected/36c9a520-9452-469d-b65e-635f7ea74105-kube-api-access-9hmpr\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724462 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4b124376-3a32-4edb-b447-70fd2bd56e47-srv-cert\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724484 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2d5\" (UniqueName: \"kubernetes.io/projected/374ad670-12b0-4842-a6ca-1cbf355b5a99-kube-api-access-qw2d5\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724501 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-registration-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724520 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fc345c8-36d3-43cb-a15f-1c38b189047d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724563 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65eec553-40f9-421b-bc5b-fd94cfdc3eee-config\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724582 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fc345c8-36d3-43cb-a15f-1c38b189047d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724601 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nch\" (UniqueName: \"kubernetes.io/projected/2ec25a43-b046-4e04-ab5c-a9cc62862bf9-kube-api-access-j9nch\") pod \"package-server-manager-789f6589d5-xpbg9\" (UID: \"2ec25a43-b046-4e04-ab5c-a9cc62862bf9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724627 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f173d12-704f-4431-aa56-05841c81d146-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724658 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8587651-0242-4cd8-b60d-1551b4908dfe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724675 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-serving-cert\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724700 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5d73faa6-98ab-4066-bf36-1f4a7609fd92-signing-key\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724729 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/031efb47-2cb0-4323-896b-67cce30690ae-node-bootstrap-token\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724749 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfedc4d3-7528-4088-ad69-171d2ec1ba14-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724770 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6221868d-3674-4d0f-9796-29338e188d50-webhook-cert\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc 
kubenswrapper[4872]: I0127 06:56:05.724787 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-socket-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724803 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b17ad7a-bf74-4ffd-ae08-f02802fdaa22-cert\") pod \"ingress-canary-nqgbm\" (UID: \"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22\") " pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724824 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd6cf\" (UniqueName: \"kubernetes.io/projected/be6179fb-ca66-4f99-9ddc-cd50f0952a1d-kube-api-access-pd6cf\") pod \"multus-admission-controller-857f4d67dd-ml5m6\" (UID: \"be6179fb-ca66-4f99-9ddc-cd50f0952a1d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724862 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4b124376-3a32-4edb-b447-70fd2bd56e47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724879 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f173d12-704f-4431-aa56-05841c81d146-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724900 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec25a43-b046-4e04-ab5c-a9cc62862bf9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpbg9\" (UID: \"2ec25a43-b046-4e04-ab5c-a9cc62862bf9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.724981 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmdk\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-kube-api-access-btmdk\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.725284 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbtg7\" (UniqueName: \"kubernetes.io/projected/87c928c3-3313-4bb8-afe7-9438975f6f51-kube-api-access-zbtg7\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.725904 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-csi-data-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.726254 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-mountpoint-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.727138 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6221868d-3674-4d0f-9796-29338e188d50-tmpfs\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.727328 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db57d95-3aeb-4d52-81fe-d87a474adb7b-config\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.729959 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65eec553-40f9-421b-bc5b-fd94cfdc3eee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.730321 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8587651-0242-4cd8-b60d-1551b4908dfe-proxy-tls\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.730631 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5d73faa6-98ab-4066-bf36-1f4a7609fd92-signing-cabundle\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.731529 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-config\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.731825 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.231811084 +0000 UTC m=+142.759286270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.731879 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-images\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.732745 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3bd541e-8647-49ce-be1c-85f6e5c90f86-config-volume\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.732776 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj4rj\" (UniqueName: \"kubernetes.io/projected/d579a88b-344f-4198-8070-bc7a7b73bbaf-kube-api-access-nj4rj\") pod \"machine-api-operator-5694c8668f-tktwd\" (UID: \"d579a88b-344f-4198-8070-bc7a7b73bbaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.732878 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-bound-sa-token\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.733377 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9db57d95-3aeb-4d52-81fe-d87a474adb7b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.733866 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-plugins-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.734725 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f173d12-704f-4431-aa56-05841c81d146-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.734899 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-socket-dir\") pod 
\"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.737706 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3bd541e-8647-49ce-be1c-85f6e5c90f86-metrics-tls\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.737768 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87c928c3-3313-4bb8-afe7-9438975f6f51-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b7zbz\" (UID: \"87c928c3-3313-4bb8-afe7-9438975f6f51\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.737975 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8587651-0242-4cd8-b60d-1551b4908dfe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.738771 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374ad670-12b0-4842-a6ca-1cbf355b5a99-config-volume\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.739676 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-auth-proxy-config\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.739741 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfedc4d3-7528-4088-ad69-171d2ec1ba14-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.740461 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec25a43-b046-4e04-ab5c-a9cc62862bf9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpbg9\" (UID: \"2ec25a43-b046-4e04-ab5c-a9cc62862bf9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.742215 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6221868d-3674-4d0f-9796-29338e188d50-apiservice-cert\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc 
kubenswrapper[4872]: I0127 06:56:05.742274 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/031efb47-2cb0-4323-896b-67cce30690ae-certs\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.744095 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be6179fb-ca66-4f99-9ddc-cd50f0952a1d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ml5m6\" (UID: \"be6179fb-ca66-4f99-9ddc-cd50f0952a1d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.744326 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fc345c8-36d3-43cb-a15f-1c38b189047d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.744655 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/031efb47-2cb0-4323-896b-67cce30690ae-node-bootstrap-token\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.744770 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-registration-dir\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.744966 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-proxy-tls\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.745348 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-serving-cert\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: \"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.746406 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65eec553-40f9-421b-bc5b-fd94cfdc3eee-config\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.746557 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36c9a520-9452-469d-b65e-635f7ea74105-srv-cert\") pod \"catalog-operator-68c6474976-qlql6\" (UID: 
\"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.748748 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/374ad670-12b0-4842-a6ca-1cbf355b5a99-secret-volume\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.749458 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36c9a520-9452-469d-b65e-635f7ea74105-profile-collector-cert\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.751102 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f173d12-704f-4431-aa56-05841c81d146-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.752562 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b17ad7a-bf74-4ffd-ae08-f02802fdaa22-cert\") pod \"ingress-canary-nqgbm\" (UID: \"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22\") " pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.753087 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2h8t\" (UniqueName: \"kubernetes.io/projected/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-kube-api-access-h2h8t\") pod \"controller-manager-879f6c89f-mmpq8\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.755977 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa0d3672-506e-417a-8528-ba9e7ea8a2ba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dx5vk\" (UID: \"aa0d3672-506e-417a-8528-ba9e7ea8a2ba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.758817 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5d73faa6-98ab-4066-bf36-1f4a7609fd92-signing-key\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.766116 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6221868d-3674-4d0f-9796-29338e188d50-webhook-cert\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.766548 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fc345c8-36d3-43cb-a15f-1c38b189047d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.767344 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fc345c8-36d3-43cb-a15f-1c38b189047d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mzjg4\" (UID: \"5fc345c8-36d3-43cb-a15f-1c38b189047d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.780953 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfedc4d3-7528-4088-ad69-171d2ec1ba14-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.799119 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4b124376-3a32-4edb-b447-70fd2bd56e47-profile-collector-cert\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.799341 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.799935 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4b124376-3a32-4edb-b447-70fd2bd56e47-srv-cert\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.814710 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsgn\" (UniqueName: \"kubernetes.io/projected/4b124376-3a32-4edb-b447-70fd2bd56e47-kube-api-access-wdsgn\") pod \"olm-operator-6b444d44fb-888dw\" (UID: \"4b124376-3a32-4edb-b447-70fd2bd56e47\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: W0127 06:56:05.814830 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec6eaa6_0516_4dad_8481_ef8bf49935de.slice/crio-d567fc4c3c638fc845f7593a7862b4438aaee5cfe5c159472081a788f87b1c43 WatchSource:0}: Error finding container d567fc4c3c638fc845f7593a7862b4438aaee5cfe5c159472081a788f87b1c43: Status 404 returned error can't find the container with id d567fc4c3c638fc845f7593a7862b4438aaee5cfe5c159472081a788f87b1c43 Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.820269 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db57d95-3aeb-4d52-81fe-d87a474adb7b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lqbck\" (UID: \"9db57d95-3aeb-4d52-81fe-d87a474adb7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.824086 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.829319 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.829742 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.329728075 +0000 UTC m=+142.857203271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.831799 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.839570 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xs6x\" (UniqueName: \"kubernetes.io/projected/b5ca7461-c53d-4337-b195-1a7b3e1a0d71-kube-api-access-6xs6x\") pod \"csi-hostpathplugin-6k8gx\" (UID: \"b5ca7461-c53d-4337-b195-1a7b3e1a0d71\") " pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.854502 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88d5\" (UniqueName: \"kubernetes.io/projected/aa0d3672-506e-417a-8528-ba9e7ea8a2ba-kube-api-access-f88d5\") pod \"control-plane-machine-set-operator-78cbb6b69f-dx5vk\" (UID: \"aa0d3672-506e-417a-8528-ba9e7ea8a2ba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.876580 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlhc9\" (UniqueName: \"kubernetes.io/projected/9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee-kube-api-access-tlhc9\") pod \"machine-config-operator-74547568cd-62544\" (UID: \"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.890757 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqh77\" (UniqueName: \"kubernetes.io/projected/031efb47-2cb0-4323-896b-67cce30690ae-kube-api-access-jqh77\") pod \"machine-config-server-vlcpq\" (UID: \"031efb47-2cb0-4323-896b-67cce30690ae\") " pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.898940 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.899625 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gdn9v" event={"ID":"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595","Type":"ContainerStarted","Data":"126696e39d29a007f161455a986f4474991401d30b90c7bc4d32ad6c1242b9ba"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.905485 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4bb\" (UniqueName: \"kubernetes.io/projected/def7d421-8c6f-4694-96ed-ac9beaeed3f6-kube-api-access-wr4bb\") pod \"migrator-59844c95c7-v6d6v\" (UID: \"def7d421-8c6f-4694-96ed-ac9beaeed3f6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.922648 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.931289 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:05 crc kubenswrapper[4872]: E0127 06:56:05.931637 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.431625319 +0000 UTC m=+142.959100515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.933191 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-phgkc" event={"ID":"fa6b6759-481a-407c-97a8-d85918a467d7","Type":"ContainerStarted","Data":"6e58629d46baeee09b7575d2b112ee23b7e4cb6576d39f68638d918722495898"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.933233 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-phgkc" event={"ID":"fa6b6759-481a-407c-97a8-d85918a467d7","Type":"ContainerStarted","Data":"cf2881e95aec9776f51dc70fc644a711afd2a4c9fece7db99729905b9a844a5d"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.934071 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.934466 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrvph\" (UniqueName: \"kubernetes.io/projected/a8587651-0242-4cd8-b60d-1551b4908dfe-kube-api-access-vrvph\") pod \"machine-config-controller-84d6567774-w5f8q\" (UID: \"a8587651-0242-4cd8-b60d-1551b4908dfe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.946037 4872 patch_prober.go:28] interesting pod/console-operator-58897d9998-phgkc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.946100 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-phgkc" podUID="fa6b6759-481a-407c-97a8-d85918a467d7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.948829 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" event={"ID":"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f","Type":"ContainerStarted","Data":"b5deab50151e8d8af0015c014d119f23adc911b05f0dd298657d3ea5a7024456"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.948887 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" event={"ID":"c8eaa3c3-f5c0-46dc-b094-c1e39b07f73f","Type":"ContainerStarted","Data":"7ebf32ebd050a045c6773d0d29d1892dd3fa644b8c3b73e859aa0d350b4c1f93"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.950944 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ntnst" event={"ID":"8cfa7f72-9e39-485f-894a-276893a688e1","Type":"ContainerStarted","Data":"d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.950969 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ntnst" event={"ID":"8cfa7f72-9e39-485f-894a-276893a688e1","Type":"ContainerStarted","Data":"00275fe108eab5dab172a2c1e7046f5bc2fa279d90bd66cb6d904cc059380e96"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.955693 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65eec553-40f9-421b-bc5b-fd94cfdc3eee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dn2vc\" (UID: \"65eec553-40f9-421b-bc5b-fd94cfdc3eee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.968093 4872 generic.go:334] "Generic (PLEG): container finished" podID="a413fe38-46c6-4603-96cc-4667937fe849" containerID="386726925ad07d6b14c761bcf9e417a26f477d1114f5782c8a38e7e0ac2c572f" exitCode=0 Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.968181 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" event={"ID":"a413fe38-46c6-4603-96cc-4667937fe849","Type":"ContainerDied","Data":"386726925ad07d6b14c761bcf9e417a26f477d1114f5782c8a38e7e0ac2c572f"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.968209 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" event={"ID":"a413fe38-46c6-4603-96cc-4667937fe849","Type":"ContainerStarted","Data":"b2f969f1277a06cc5f39194e78d1bfa7d5de1c854623545dc3807645bdaee479"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.977600 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" event={"ID":"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a","Type":"ContainerStarted","Data":"1889425b960d68a7f96104f0e492fce51f0148214934ca87cdd4962e89db755f"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.977653 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" event={"ID":"2a3c2ef4-f90a-41f7-9ed6-69db3d0f7e1a","Type":"ContainerStarted","Data":"e53d2b059dd0dc92cb8a630f4e2a9ce77d1e4fb252bc69f2e417339569529dbe"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.980423 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8p6s\" (UniqueName: \"kubernetes.io/projected/138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba-kube-api-access-f8p6s\") pod \"service-ca-operator-777779d784-vqwgz\" (UID: 
\"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.980783 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.986250 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv7xf\" (UniqueName: \"kubernetes.io/projected/d3bd541e-8647-49ce-be1c-85f6e5c90f86-kube-api-access-cv7xf\") pod \"dns-default-n6d8q\" (UID: \"d3bd541e-8647-49ce-be1c-85f6e5c90f86\") " pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.986384 4872 generic.go:334] "Generic (PLEG): container finished" podID="86a2fead-ca68-40da-9054-4dd764d24686" containerID="aa2aa3ac9fbbe74ac45346070a0e281668fac581d5dbb6f8828e84c34bc70a04" exitCode=0 Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.987167 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" event={"ID":"86a2fead-ca68-40da-9054-4dd764d24686","Type":"ContainerDied","Data":"aa2aa3ac9fbbe74ac45346070a0e281668fac581d5dbb6f8828e84c34bc70a04"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.987195 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" event={"ID":"86a2fead-ca68-40da-9054-4dd764d24686","Type":"ContainerStarted","Data":"09d67a2750777fc1db508a18fa296be59837ddc0e48509bf38e436b61e6689d9"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.990223 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.992276 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" event={"ID":"652122b5-378f-48ab-9d77-d30cda97a77d","Type":"ContainerStarted","Data":"709adf81f77aea8c299dc89cecfe588c719552c0839c17d089fcf072d511c9f0"} Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.995677 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:05 crc kubenswrapper[4872]: I0127 06:56:05.999588 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" event={"ID":"2ec6eaa6-0516-4dad-8481-ef8bf49935de","Type":"ContainerStarted","Data":"d567fc4c3c638fc845f7593a7862b4438aaee5cfe5c159472081a788f87b1c43"} Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.001625 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" event={"ID":"cac19a4e-9870-4f41-8fd6-26126ab86c21","Type":"ContainerStarted","Data":"2ac835c8e657a851d85dd93d210848bf4109a80df18ecf65aeed0c6d593e1c70"} Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.011087 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vlcpq" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.013161 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxzj\" (UniqueName: \"kubernetes.io/projected/7b17ad7a-bf74-4ffd-ae08-f02802fdaa22-kube-api-access-tzxzj\") pod \"ingress-canary-nqgbm\" (UID: \"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22\") " pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.017864 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.028488 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9chd\" (UniqueName: \"kubernetes.io/projected/3f173d12-704f-4431-aa56-05841c81d146-kube-api-access-s9chd\") pod \"marketplace-operator-79b997595-4lxtl\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.032682 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.035453 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.535437406 +0000 UTC m=+143.062912592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.055820 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvwkq\" (UniqueName: \"kubernetes.io/projected/5d73faa6-98ab-4066-bf36-1f4a7609fd92-kube-api-access-wvwkq\") pod \"service-ca-9c57cc56f-nk52m\" (UID: \"5d73faa6-98ab-4066-bf36-1f4a7609fd92\") " pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.066603 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhgv\" (UniqueName: \"kubernetes.io/projected/cfedc4d3-7528-4088-ad69-171d2ec1ba14-kube-api-access-qbhgv\") pod \"kube-storage-version-migrator-operator-b67b599dd-qd9g7\" (UID: \"cfedc4d3-7528-4088-ad69-171d2ec1ba14\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.110978 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.117291 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z4fb\" (UniqueName: \"kubernetes.io/projected/6221868d-3674-4d0f-9796-29338e188d50-kube-api-access-2z4fb\") pod \"packageserver-d55dfcdfc-76vrz\" (UID: \"6221868d-3674-4d0f-9796-29338e188d50\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.117929 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmpr\" (UniqueName: \"kubernetes.io/projected/36c9a520-9452-469d-b65e-635f7ea74105-kube-api-access-9hmpr\") pod \"catalog-operator-68c6474976-qlql6\" (UID: \"36c9a520-9452-469d-b65e-635f7ea74105\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.135369 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd6cf\" (UniqueName: \"kubernetes.io/projected/be6179fb-ca66-4f99-9ddc-cd50f0952a1d-kube-api-access-pd6cf\") pod \"multus-admission-controller-857f4d67dd-ml5m6\" (UID: \"be6179fb-ca66-4f99-9ddc-cd50f0952a1d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.139150 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.139458 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.639446609 +0000 UTC m=+143.166921805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.140713 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.141380 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.141409 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6plnf"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.149223 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.151166 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vvsln"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.157129 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2d5\" (UniqueName: \"kubernetes.io/projected/374ad670-12b0-4842-a6ca-1cbf355b5a99-kube-api-access-qw2d5\") pod \"collect-profiles-29491605-gwlv7\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.162184 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.169658 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.179910 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.182577 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nch\" (UniqueName: \"kubernetes.io/projected/2ec25a43-b046-4e04-ab5c-a9cc62862bf9-kube-api-access-j9nch\") pod \"package-server-manager-789f6589d5-xpbg9\" (UID: \"2ec25a43-b046-4e04-ab5c-a9cc62862bf9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.186882 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.214756 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.219790 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.230100 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.252604 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.252967 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.752953009 +0000 UTC m=+143.280428205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.258030 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.258337 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.274319 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.281373 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.306950 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nqgbm" Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.318109 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9"] Jan 27 06:56:06 crc kubenswrapper[4872]: W0127 06:56:06.326003 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfecd0f15_a29c_4508_af28_9169b2cf96b7.slice/crio-711fd3d4ee96307d34bc7867c977a94aa21fdd1371edba215787495a4ac3161a WatchSource:0}: Error finding container 711fd3d4ee96307d34bc7867c977a94aa21fdd1371edba215787495a4ac3161a: Status 404 returned error can't find the container with id 711fd3d4ee96307d34bc7867c977a94aa21fdd1371edba215787495a4ac3161a Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.331229 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kp6fd"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.338120 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.354020 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.355743 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.855722378 +0000 UTC m=+143.383197584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: W0127 06:56:06.401613 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687ef6ff_0cea_474f_892a_fdca8d9e386e.slice/crio-7edaf99f34efd64d1c9ab28a5e32f3d9c4bce0f219e4948f8428d34ce9861ad6 WatchSource:0}: Error finding container 7edaf99f34efd64d1c9ab28a5e32f3d9c4bce0f219e4948f8428d34ce9861ad6: Status 404 returned error can't find the container with id 7edaf99f34efd64d1c9ab28a5e32f3d9c4bce0f219e4948f8428d34ce9861ad6 Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.416564 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" Jan 27 06:56:06 crc kubenswrapper[4872]: W0127 06:56:06.417391 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34692f41_ceea_4bcf_a05d_8b0ceca661df.slice/crio-743ef08a9bae0ac2380c9cd6ab3a25ef99e7c640b5c62f2309a1e781dd907f44 WatchSource:0}: Error finding container 743ef08a9bae0ac2380c9cd6ab3a25ef99e7c640b5c62f2309a1e781dd907f44: Status 404 returned error can't find the container with id 743ef08a9bae0ac2380c9cd6ab3a25ef99e7c640b5c62f2309a1e781dd907f44 Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.456133 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.456304 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.956290364 +0000 UTC m=+143.483765550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.456327 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.456741 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.457688 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:06.957672613 +0000 UTC m=+143.485147799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: W0127 06:56:06.503228 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1853449_b48b_49ed_aeee_4c0dce155450.slice/crio-5b4ae25c7bffc3603db522dfb3d3343107e76b5d2eb59e673c30f90c2377e173 WatchSource:0}: Error finding container 5b4ae25c7bffc3603db522dfb3d3343107e76b5d2eb59e673c30f90c2377e173: Status 404 returned error can't find the container with id 5b4ae25c7bffc3603db522dfb3d3343107e76b5d2eb59e673c30f90c2377e173 Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.524372 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.558442 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.558662 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.05862127 +0000 UTC m=+143.586096466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.558804 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.559213 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.059204516 +0000 UTC m=+143.586679712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.636522 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.660990 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.661110 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.161086049 +0000 UTC m=+143.688561245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.661238 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.661496 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.16148849 +0000 UTC m=+143.688963676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.696367 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.764791 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.765248 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.264989389 +0000 UTC m=+143.792464585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.765491 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.765764 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.265756421 +0000 UTC m=+143.793231617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.785400 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6k8gx"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.814575 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mmpq8"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.818831 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tktwd"] Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.868538 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.868983 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.36896383 +0000 UTC m=+143.896439036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.974330 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:06 crc kubenswrapper[4872]: E0127 06:56:06.974681 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.474669941 +0000 UTC m=+144.002145137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:06 crc kubenswrapper[4872]: I0127 06:56:06.975088 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n6d8q"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.066779 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.075620 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.075945 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.575912886 +0000 UTC m=+144.103388082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.084164 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-62544"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.085621 4872 csr.go:261] certificate signing request csr-w4kwk is approved, waiting to be issued Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.089310 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" event={"ID":"a31dcc31-ff38-40f9-b26d-fb3757f651c5","Type":"ContainerStarted","Data":"9caeef24e92a562a561c1d76c200ddd51875af403197ad8ea2ee32190600c0b7"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.091700 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" event={"ID":"cac19a4e-9870-4f41-8fd6-26126ab86c21","Type":"ContainerStarted","Data":"fc24159cb7c8edef8ab26cf772876ae5212386db5a9df8dae7e3332b96d80d17"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.093805 4872 csr.go:257] certificate signing request csr-w4kwk is issued Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.095402 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" event={"ID":"b5ca7461-c53d-4337-b195-1a7b3e1a0d71","Type":"ContainerStarted","Data":"9c93ebe3c43f823d9d68fb9f7048532f4f6c6bee31bd08b561543342e3aeae9a"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.108440 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" event={"ID":"6d4456cc-8ca4-460c-a793-4c16fa6cbc07","Type":"ContainerStarted","Data":"6045dd6f75471b82255176e4d0eee52f5bc27986b8e45dbf5608fe061b422f30"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.114929 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" event={"ID":"8473f532-cab1-4b2b-8402-f4eec9d94bd2","Type":"ContainerStarted","Data":"887105dd620498c183bcc3b25d4516eb9be20e586e898b06165637932a29aae6"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.121603 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" event={"ID":"34692f41-ceea-4bcf-a05d-8b0ceca661df","Type":"ContainerStarted","Data":"743ef08a9bae0ac2380c9cd6ab3a25ef99e7c640b5c62f2309a1e781dd907f44"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.123867 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" event={"ID":"652122b5-378f-48ab-9d77-d30cda97a77d","Type":"ContainerStarted","Data":"cf6b64454719f6b624b1ee999f39e9a1ef3553e6376b3d958d4280be160ba816"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.126822 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q"] Jan 27 06:56:07 crc 
kubenswrapper[4872]: I0127 06:56:07.134605 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" event={"ID":"2ec6eaa6-0516-4dad-8481-ef8bf49935de","Type":"ContainerStarted","Data":"42c8ce3ead921d5e7d0ee7485300d0ffd22dc566bb578f0b2f0d37d6eaec36a3"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.137961 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.139338 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" event={"ID":"d579a88b-344f-4198-8070-bc7a7b73bbaf","Type":"ContainerStarted","Data":"957b095360c84d07685710265d6777fa34b68a818144cdcb8af3a5394d7b9ba9"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.140285 4872 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vs2xs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.140318 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.141051 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" event={"ID":"4b124376-3a32-4edb-b447-70fd2bd56e47","Type":"ContainerStarted","Data":"ec2ec2b5c864e94405a0ea3ade2a10438b65db10aecf9f76c8fdd394d2bbd08e"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.150727 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" event={"ID":"87c928c3-3313-4bb8-afe7-9438975f6f51","Type":"ContainerStarted","Data":"be0daaaa15cb9e2f86f34ee73c81c1a1f0f261dac1bf31d92cc41df24254c6ca"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.155037 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" event={"ID":"687ef6ff-0cea-474f-892a-fdca8d9e386e","Type":"ContainerStarted","Data":"7edaf99f34efd64d1c9ab28a5e32f3d9c4bce0f219e4948f8428d34ce9861ad6"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.163891 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" event={"ID":"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4","Type":"ContainerStarted","Data":"e4ee39c0b20e5721be6880348bf130aed828795bc0029be48cca1d44f2361c6c"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.164570 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" event={"ID":"d1853449-b48b-49ed-aeee-4c0dce155450","Type":"ContainerStarted","Data":"5b4ae25c7bffc3603db522dfb3d3343107e76b5d2eb59e673c30f90c2377e173"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.173486 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-gdn9v" event={"ID":"fe5c29a5-1e27-4a9c-8050-ee9c10d2d595","Type":"ContainerStarted","Data":"d9a7bfb3d8a1dbeb4a807fbee9765a97bfc9e9063d75f39b4830fa46540f485f"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.178007 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.179083 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" event={"ID":"9db57d95-3aeb-4d52-81fe-d87a474adb7b","Type":"ContainerStarted","Data":"bcfd53d19ef5c1d6219736ba039d4de623f7ce22a93c3fb6d0fa0cc72ecdc0b1"} Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.180305 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.6802928 +0000 UTC m=+144.207767986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.183894 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" event={"ID":"5fc345c8-36d3-43cb-a15f-1c38b189047d","Type":"ContainerStarted","Data":"6d6cc432aed80f9e7e0f022c8f74e7be2acc46646ca048aefdaebd5695194941"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.189692 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" event={"ID":"aa0d3672-506e-417a-8528-ba9e7ea8a2ba","Type":"ContainerStarted","Data":"cf406b9bdba4198e2172870fe7aa07150f57d4e30e9f9ac90e64d16d638e791a"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.198709 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6plnf" event={"ID":"fecd0f15-a29c-4508-af28-9169b2cf96b7","Type":"ContainerStarted","Data":"711fd3d4ee96307d34bc7867c977a94aa21fdd1371edba215787495a4ac3161a"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.201029 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vlcpq" event={"ID":"031efb47-2cb0-4323-896b-67cce30690ae","Type":"ContainerStarted","Data":"7e7766e7fb3289a0886cdab85c91c02f908de351afec9b437e130961f4ea822d"} Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.205981 4872 patch_prober.go:28] interesting pod/console-operator-58897d9998-phgkc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" 
start-of-body= Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.206033 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-phgkc" podUID="fa6b6759-481a-407c-97a8-d85918a467d7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.286284 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.289164 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.789118278 +0000 UTC m=+144.316593474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.388891 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.389569 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.88955237 +0000 UTC m=+144.417027566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.495365 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.495721 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.496201 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:07.996184187 +0000 UTC m=+144.523659383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.508762 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:07 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:07 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:07 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.508818 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:07 crc kubenswrapper[4872]: W0127 06:56:07.573917 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3bd541e_8647_49ce_be1c_85f6e5c90f86.slice/crio-de72b1eddee8b40b1de238ff87a23b8beccc25319d7d648e22d6d8b034cb8b7d WatchSource:0}: Error finding container de72b1eddee8b40b1de238ff87a23b8beccc25319d7d648e22d6d8b034cb8b7d: Status 404 returned error can't find the container with id de72b1eddee8b40b1de238ff87a23b8beccc25319d7d648e22d6d8b034cb8b7d Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.596887 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.597880 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.097867235 +0000 UTC m=+144.625342431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.627249 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8xlp4" podStartSLOduration=123.627224073 podStartE2EDuration="2m3.627224073s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.604164032 +0000 UTC m=+144.131639238" watchObservedRunningTime="2026-01-27 06:56:07.627224073 +0000 UTC m=+144.154699269" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.667081 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-phgkc" podStartSLOduration=123.667068056 podStartE2EDuration="2m3.667068056s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.666624964 +0000 UTC m=+144.194100160" watchObservedRunningTime="2026-01-27 06:56:07.667068056 +0000 UTC m=+144.194543252" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.695585 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lxtl"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.705360 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.705531 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.205516181 +0000 UTC m=+144.732991377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.705666 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.710945 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.210925433 +0000 UTC m=+144.738400629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.715453 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-t9txz" podStartSLOduration=123.71543759 podStartE2EDuration="2m3.71543759s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.714235586 +0000 UTC m=+144.241710782" watchObservedRunningTime="2026-01-27 06:56:07.71543759 +0000 UTC m=+144.242912776" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.725635 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.753303 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nqgbm"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.810648 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.810863 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.31082373 +0000 UTC m=+144.838298916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.811054 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.811439 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.311427448 +0000 UTC m=+144.838902644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.823122 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.825608 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" podStartSLOduration=123.825582447 podStartE2EDuration="2m3.825582447s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.802062093 +0000 UTC m=+144.329537289" watchObservedRunningTime="2026-01-27 06:56:07.825582447 +0000 UTC m=+144.353057643" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.831331 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sslz9" podStartSLOduration=123.831317398 podStartE2EDuration="2m3.831317398s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.82570355 +0000 UTC m=+144.353178746" watchObservedRunningTime="2026-01-27 06:56:07.831317398 +0000 UTC m=+144.358792594" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.858467 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ntnst" podStartSLOduration=123.858423812 podStartE2EDuration="2m3.858423812s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
06:56:07.854424569 +0000 UTC m=+144.381899785" watchObservedRunningTime="2026-01-27 06:56:07.858423812 +0000 UTC m=+144.385899008" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.912933 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:07 crc kubenswrapper[4872]: E0127 06:56:07.913589 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.413574237 +0000 UTC m=+144.941049423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.935969 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.937367 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gdn9v" podStartSLOduration=123.937355298 podStartE2EDuration="2m3.937355298s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.932691996 +0000 UTC m=+144.460167212" watchObservedRunningTime="2026-01-27 06:56:07.937355298 +0000 UTC m=+144.464830494" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.969033 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9"] Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.998107 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-27hjj" podStartSLOduration=123.99807376 podStartE2EDuration="2m3.99807376s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:07.967715444 +0000 UTC m=+144.495190640" watchObservedRunningTime="2026-01-27 06:56:07.99807376 +0000 UTC m=+144.525548956" Jan 27 06:56:07 crc kubenswrapper[4872]: I0127 06:56:07.999003 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ml5m6"] Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.021874 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.022200 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.5221806 +0000 UTC m=+145.049655796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.044854 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nk52m"] Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.063710 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7"] Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.068646 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v"] Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.070452 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6"] Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.097122 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 06:51:07 +0000 UTC, rotation deadline is 2026-12-17 20:22:02.229685202 +0000 UTC Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.097150 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7789h25m54.132538097s for next certificate rotation Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.130428 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.130743 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.630729742 +0000 UTC m=+145.158204938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: W0127 06:56:08.222747 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d73faa6_98ab_4066_bf36_1f4a7609fd92.slice/crio-6b00244c70ce28a16dc95df5a9702dd01e54a4cf0821c8180635092cd6671090 WatchSource:0}: Error finding container 6b00244c70ce28a16dc95df5a9702dd01e54a4cf0821c8180635092cd6671090: Status 404 returned error can't find the container with id 6b00244c70ce28a16dc95df5a9702dd01e54a4cf0821c8180635092cd6671090 Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.238466 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.238865 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.7388533 +0000 UTC m=+145.266328496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.330745 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vlcpq" event={"ID":"031efb47-2cb0-4323-896b-67cce30690ae","Type":"ContainerStarted","Data":"5dbdc0a99e2388ea79ba2cb6cab6e6d4779d3cdcdb821e9cad6f8061c6fdbd85"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.338857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n6d8q" event={"ID":"d3bd541e-8647-49ce-be1c-85f6e5c90f86","Type":"ContainerStarted","Data":"de72b1eddee8b40b1de238ff87a23b8beccc25319d7d648e22d6d8b034cb8b7d"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.339200 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.339442 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.839421976 +0000 UTC m=+145.366897172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.339569 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.339952 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.839938981 +0000 UTC m=+145.367414177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.387824 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" event={"ID":"86a2fead-ca68-40da-9054-4dd764d24686","Type":"ContainerStarted","Data":"b127b3e15a2581972c2b0f009079c9e171258d57ccbd816ba950c3ac65bb878f"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.433321 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" event={"ID":"687ef6ff-0cea-474f-892a-fdca8d9e386e","Type":"ContainerStarted","Data":"d4218a110036f4a8cb511986898a71a28cb7f9f28b1df98590f0e646b52c738a"} Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.440979 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.940942859 +0000 UTC m=+145.468418055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.444922 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.445297 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.446302 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" event={"ID":"d1853449-b48b-49ed-aeee-4c0dce155450","Type":"ContainerStarted","Data":"185800a10832024ad330b04ad0ab162be5fe11397be3eb95f6a715811d82e354"} Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.446824 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:08.946811744 +0000 UTC m=+145.474286940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.448356 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" event={"ID":"65eec553-40f9-421b-bc5b-fd94cfdc3eee","Type":"ContainerStarted","Data":"9f94df5640021a29167c742fae714a48ddab6789dc2470ffea92d6106bdefa98"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.451824 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" event={"ID":"2ec25a43-b046-4e04-ab5c-a9cc62862bf9","Type":"ContainerStarted","Data":"8a10ae9e69540ce310c373444f8b5b80b212b6fe6b3770c21c355a2a9b0937ee"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.457069 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" event={"ID":"374ad670-12b0-4842-a6ca-1cbf355b5a99","Type":"ContainerStarted","Data":"dcffe5d606a0d4e1a55a6805801999c2e77db922966453fc76f6b3ed8834ea72"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.458026 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" event={"ID":"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba","Type":"ContainerStarted","Data":"70bf2bbdbba555b38a9feb7b97eb5cd73b513359cf05e31507e353159450aeda"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.461822 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6plnf" event={"ID":"fecd0f15-a29c-4508-af28-9169b2cf96b7","Type":"ContainerStarted","Data":"b39589f405ed9f8aab19329eba8ba9b01872d71fda141a3ae856a911968b74d6"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.462809 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.466936 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.467004 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.469936 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" event={"ID":"a413fe38-46c6-4603-96cc-4667937fe849","Type":"ContainerStarted","Data":"fcd2b459e6532f26983c8759a9aed485966dd758564e96ede6e7859dfdf698f4"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.474601 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nqgbm" 
event={"ID":"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22","Type":"ContainerStarted","Data":"a50cd23fdae12331a27a83956a3c78ac5b7e142715133ce8a5a5cc5190a401c4"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.505369 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:08 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:08 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:08 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.505451 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.536309 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" event={"ID":"6221868d-3674-4d0f-9796-29338e188d50","Type":"ContainerStarted","Data":"31fece87b366afd70dfa21888953819b8ab47558ebcd3af2408929de188703f9"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.547253 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.547634 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" event={"ID":"6d4456cc-8ca4-460c-a793-4c16fa6cbc07","Type":"ContainerStarted","Data":"3cf89642afbbeb58896b04736fcc12ebbbe9e27c2e08a5ef51bb047a88d57a56"} Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.547775 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.04775444 +0000 UTC m=+145.575229646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.548391 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.549657 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.048928234 +0000 UTC m=+145.576403430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.585746 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" event={"ID":"3f173d12-704f-4431-aa56-05841c81d146","Type":"ContainerStarted","Data":"3e427632378313e81ede5b79f57d46cf78e24920a0878ae58f1c486c7a508a64"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.588900 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6plnf" podStartSLOduration=124.588879321 podStartE2EDuration="2m4.588879321s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:08.585764663 +0000 UTC m=+145.113239859" watchObservedRunningTime="2026-01-27 06:56:08.588879321 +0000 UTC m=+145.116354527" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.597468 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" event={"ID":"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee","Type":"ContainerStarted","Data":"ab4ded9acfe70eecea5117f619f065e3600c571373b1eae5afa70ae8e424ef38"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.617731 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" event={"ID":"be6179fb-ca66-4f99-9ddc-cd50f0952a1d","Type":"ContainerStarted","Data":"fd6b2777e941d34649a45583fdecce38d9b99a33a4fd1b90998eb6a820f27e43"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.637769 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" podStartSLOduration=124.637749069 
podStartE2EDuration="2m4.637749069s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:08.632396548 +0000 UTC m=+145.159871764" watchObservedRunningTime="2026-01-27 06:56:08.637749069 +0000 UTC m=+145.165224265" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.639748 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" event={"ID":"87c928c3-3313-4bb8-afe7-9438975f6f51","Type":"ContainerStarted","Data":"dc2227ef9220679f3a427f88c653df40e6505e9bafc26622efed211b2500e27d"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.642635 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" event={"ID":"a8587651-0242-4cd8-b60d-1551b4908dfe","Type":"ContainerStarted","Data":"a72e1a5b2e4432a76de7fafe6a5d8795c395fe9729a90a40059a8121dfe4d880"} Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.648969 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-phgkc" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.649753 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.652719 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.654172 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.154152471 +0000 UTC m=+145.681627667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.661632 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bw4wb" podStartSLOduration=124.661598261 podStartE2EDuration="2m4.661598261s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:08.659631625 +0000 UTC m=+145.187106821" watchObservedRunningTime="2026-01-27 06:56:08.661598261 +0000 UTC m=+145.189073457" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.700882 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vlcpq" podStartSLOduration=5.700833798 podStartE2EDuration="5.700833798s" podCreationTimestamp="2026-01-27 06:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:08.698947844 +0000 UTC m=+145.226423040" watchObservedRunningTime="2026-01-27 06:56:08.700833798 +0000 UTC m=+145.228308994" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.754880 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.756373 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.256319632 +0000 UTC m=+145.783794828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.789014 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q5xfh" podStartSLOduration=124.788993084 podStartE2EDuration="2m4.788993084s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:08.75021248 +0000 UTC m=+145.277687676" watchObservedRunningTime="2026-01-27 06:56:08.788993084 +0000 UTC m=+145.316468290" Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.863252 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.863709 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.36369375 +0000 UTC m=+145.891168946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:08 crc kubenswrapper[4872]: I0127 06:56:08.965689 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:08 crc kubenswrapper[4872]: E0127 06:56:08.966134 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.466120578 +0000 UTC m=+145.993595784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.067623 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.068262 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.568235578 +0000 UTC m=+146.095710774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.068700 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.069096 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.569063951 +0000 UTC m=+146.096539147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.169595 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.169925 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.669907835 +0000 UTC m=+146.197383031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.271209 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.271654 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.771634603 +0000 UTC m=+146.299109869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.372321 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.372700 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.872685453 +0000 UTC m=+146.400160649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.476821 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.477222 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:09.97720872 +0000 UTC m=+146.504683916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.497312 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:09 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:09 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:09 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.497517 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.578996 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.579402 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.079388071 +0000 UTC m=+146.606863267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.663943 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" event={"ID":"138d4fa5-ae1a-4bec-9ce7-adf0e0c2a9ba","Type":"ContainerStarted","Data":"838d4a5a665e4e8a3fdf7e8166183a342458d93ce6b98d3a9731d1c4002884f8"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.664234 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.664310 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.665397 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" event={"ID":"a8587651-0242-4cd8-b60d-1551b4908dfe","Type":"ContainerStarted","Data":"37fc31118916e4fdb29079879d24d71ecf830069470c80b5b4a1961728fe38db"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.669015 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" event={"ID":"3f173d12-704f-4431-aa56-05841c81d146","Type":"ContainerStarted","Data":"776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.669571 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.673852 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lxtl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.673902 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.675853 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" event={"ID":"def7d421-8c6f-4694-96ed-ac9beaeed3f6","Type":"ContainerStarted","Data":"920d38c384c294510393674c696715be5cac945f6430dc845e5faf76011033ea"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.682572 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" event={"ID":"65eec553-40f9-421b-bc5b-fd94cfdc3eee","Type":"ContainerStarted","Data":"407913d9dcd3e18b260f2bb14dc75994dcea15c71808b0eda7d4a57bf9b62bac"} Jan 27 06:56:09 crc kubenswrapper[4872]: 
I0127 06:56:09.686675 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.687069 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.187041177 +0000 UTC m=+146.714516373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.689699 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" event={"ID":"d579a88b-344f-4198-8070-bc7a7b73bbaf","Type":"ContainerStarted","Data":"fdef0714db02d54cfe068554b0151fb3ce6dd43c4853555af034aa4cce5bd2fc"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.696169 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vqwgz" podStartSLOduration=125.696153135 podStartE2EDuration="2m5.696153135s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:09.694200709 +0000 UTC m=+146.221675915" watchObservedRunningTime="2026-01-27 06:56:09.696153135 +0000 UTC m=+146.223628331" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.704286 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" event={"ID":"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee","Type":"ContainerStarted","Data":"a063d1deef61610ecd83f13889bd86bf9568b4ddff60edf0ea33dc4e8667ad47"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.717047 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" event={"ID":"a31dcc31-ff38-40f9-b26d-fb3757f651c5","Type":"ContainerStarted","Data":"142789687c52d212a5ca92d2a05edd9ae4edc1cdcccf56fe1e2e2d72b5ad87a0"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.718178 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.720515 4872 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vvsln container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.720573 4872 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.729773 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" event={"ID":"9db57d95-3aeb-4d52-81fe-d87a474adb7b","Type":"ContainerStarted","Data":"2fa7f661877dd0e8f754719e77e962e19e74a4d514fdecd8f4f2539d52a4e645"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.747236 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" event={"ID":"8473f532-cab1-4b2b-8402-f4eec9d94bd2","Type":"ContainerStarted","Data":"01a72e76683f2980b5960025d14f811390b2ca3d6b0250636f58722716248c6e"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.782226 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" event={"ID":"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4","Type":"ContainerStarted","Data":"f386be220afaddddf5ea643f6e93ef1675a15111df565c4dd44d9c89a5525e23"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.782590 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.784378 4872 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mmpq8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.784440 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.791202 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.792114 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" event={"ID":"34692f41-ceea-4bcf-a05d-8b0ceca661df","Type":"ContainerStarted","Data":"17f5246050558febf6855068a0cb62414a7b5508a530c6b92ba2c9dca6024744"} Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.792508 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.29248701 +0000 UTC m=+146.819962206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.794195 4872 generic.go:334] "Generic (PLEG): container finished" podID="687ef6ff-0cea-474f-892a-fdca8d9e386e" containerID="d4218a110036f4a8cb511986898a71a28cb7f9f28b1df98590f0e646b52c738a" exitCode=0 Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.794245 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" event={"ID":"687ef6ff-0cea-474f-892a-fdca8d9e386e","Type":"ContainerDied","Data":"d4218a110036f4a8cb511986898a71a28cb7f9f28b1df98590f0e646b52c738a"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.804979 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dn2vc" podStartSLOduration=125.804961372 podStartE2EDuration="2m5.804961372s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:09.747135902 +0000 UTC m=+146.274611108" watchObservedRunningTime="2026-01-27 06:56:09.804961372 +0000 UTC m=+146.332436568" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.806096 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" podStartSLOduration=125.806091554 podStartE2EDuration="2m5.806091554s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:09.803745258 +0000 UTC m=+146.331220454" watchObservedRunningTime="2026-01-27 06:56:09.806091554 +0000 UTC m=+146.333566750" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.811031 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" event={"ID":"36c9a520-9452-469d-b65e-635f7ea74105","Type":"ContainerStarted","Data":"001ff6d000f7bc4cfc342aeab81bd0adcc3afc7f1076053f6bed12c654375264"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.813641 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" event={"ID":"87c928c3-3313-4bb8-afe7-9438975f6f51","Type":"ContainerStarted","Data":"945acaffe740b9eff93867864e0d1f9d5611bf1db3d3581554ba37ef28678b3b"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.829515 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" event={"ID":"aa0d3672-506e-417a-8528-ba9e7ea8a2ba","Type":"ContainerStarted","Data":"80b04d76876d3a23e629ba8044fefed2a23d0873524f2f8831e95cdc02aec3dd"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.840228 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n6d8q" 
event={"ID":"d3bd541e-8647-49ce-be1c-85f6e5c90f86","Type":"ContainerStarted","Data":"8f5da4b0763876e8bc61992a4e310f0e86e350bc475c3521813bbd134a350fea"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.850442 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" event={"ID":"6221868d-3674-4d0f-9796-29338e188d50","Type":"ContainerStarted","Data":"9dfd4bdf0fa105542b89ec85df48657eaba70b508732c7d9bfba991bb1386bc0"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.850868 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.851971 4872 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-76vrz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.851999 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" podUID="6221868d-3674-4d0f-9796-29338e188d50" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.852381 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" event={"ID":"5fc345c8-36d3-43cb-a15f-1c38b189047d","Type":"ContainerStarted","Data":"9ade026cfd1aae21eae07df46dc47417b2b7026e8098df81c7d2e675e90829e4"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.866111 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" podStartSLOduration=125.866099416 podStartE2EDuration="2m5.866099416s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:09.865354515 +0000 UTC m=+146.392829711" watchObservedRunningTime="2026-01-27 06:56:09.866099416 +0000 UTC m=+146.393574612" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.881365 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" event={"ID":"4b124376-3a32-4edb-b447-70fd2bd56e47","Type":"ContainerStarted","Data":"3ee5bdd9af41a483183209d442b091e46049d253fe1f0df5b06d606c407696d0"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.882431 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.885229 4872 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-888dw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.885255 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" 
podUID="4b124376-3a32-4edb-b447-70fd2bd56e47" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.892877 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.895115 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.395099665 +0000 UTC m=+146.922574941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.920609 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" event={"ID":"5d73faa6-98ab-4066-bf36-1f4a7609fd92","Type":"ContainerStarted","Data":"ff635c2bfb57c9295974ce02941d182d314a9b6bd687f1a09b764b2979719fd6"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.920652 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" event={"ID":"5d73faa6-98ab-4066-bf36-1f4a7609fd92","Type":"ContainerStarted","Data":"6b00244c70ce28a16dc95df5a9702dd01e54a4cf0821c8180635092cd6671090"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.923164 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" event={"ID":"cfedc4d3-7528-4088-ad69-171d2ec1ba14","Type":"ContainerStarted","Data":"b27773a9955609b105d6ff6189d28480c15dd5a0e690d8b3e3baa67ab31544ab"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.923200 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" event={"ID":"cfedc4d3-7528-4088-ad69-171d2ec1ba14","Type":"ContainerStarted","Data":"b0cd555cad9c97a3556f347d2b18e03795ae99543cccdd9e0fc0e387ec809381"} Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.930199 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.930233 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: 
connect: connection refused" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.932025 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lqbck" podStartSLOduration=125.932015845 podStartE2EDuration="2m5.932015845s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:09.930472892 +0000 UTC m=+146.457948088" watchObservedRunningTime="2026-01-27 06:56:09.932015845 +0000 UTC m=+146.459491041" Jan 27 06:56:09 crc kubenswrapper[4872]: I0127 06:56:09.995928 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:09 crc kubenswrapper[4872]: E0127 06:56:09.997292 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.497269306 +0000 UTC m=+147.024744542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.007425 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" podStartSLOduration=126.007406631 podStartE2EDuration="2m6.007406631s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.007133304 +0000 UTC m=+146.534608500" watchObservedRunningTime="2026-01-27 06:56:10.007406631 +0000 UTC m=+146.534881827" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.016635 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" podStartSLOduration=126.01661536 podStartE2EDuration="2m6.01661536s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:09.973895686 +0000 UTC m=+146.501370882" watchObservedRunningTime="2026-01-27 06:56:10.01661536 +0000 UTC m=+146.544090556" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.046915 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mzjg4" podStartSLOduration=126.046893365 podStartE2EDuration="2m6.046893365s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 06:56:10.045421443 +0000 UTC m=+146.572896649" watchObservedRunningTime="2026-01-27 06:56:10.046893365 +0000 UTC m=+146.574368561" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.103468 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.105772 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.605759484 +0000 UTC m=+147.133234680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.210046 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.210457 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.710442397 +0000 UTC m=+147.237917593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.210766 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b7zbz" podStartSLOduration=126.210757535 podStartE2EDuration="2m6.210757535s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.119896033 +0000 UTC m=+146.647371229" watchObservedRunningTime="2026-01-27 06:56:10.210757535 +0000 UTC m=+146.738232731" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.211040 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" podStartSLOduration=126.211035253 podStartE2EDuration="2m6.211035253s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.209992104 +0000 UTC m=+146.737467300" watchObservedRunningTime="2026-01-27 06:56:10.211035253 +0000 UTC m=+146.738510449" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.224087 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.238508 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nk52m" podStartSLOduration=126.238490127 podStartE2EDuration="2m6.238490127s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.237441068 +0000 UTC m=+146.764916274" watchObservedRunningTime="2026-01-27 06:56:10.238490127 +0000 UTC m=+146.765965323" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.313528 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.314267 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.814248404 +0000 UTC m=+147.341723610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.332600 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dx5vk" podStartSLOduration=126.332583591 podStartE2EDuration="2m6.332583591s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.330522723 +0000 UTC m=+146.857997919" watchObservedRunningTime="2026-01-27 06:56:10.332583591 +0000 UTC m=+146.860058787" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.392093 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qd9g7" podStartSLOduration=126.392075229 podStartE2EDuration="2m6.392075229s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.391997907 +0000 UTC m=+146.919473103" watchObservedRunningTime="2026-01-27 06:56:10.392075229 +0000 UTC m=+146.919550425" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.431440 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.431836 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:10.931817879 +0000 UTC m=+147.459293075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.499595 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:10 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:10 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:10 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.499978 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.533630 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.534057 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.034040582 +0000 UTC m=+147.561515778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.634631 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.635043 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.135023809 +0000 UTC m=+147.662499005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.736504 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.736935 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.236914813 +0000 UTC m=+147.764390009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.837435 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.837617 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.337586862 +0000 UTC m=+147.865062068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.837715 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.838039 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.338024204 +0000 UTC m=+147.865499450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.932112 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" event={"ID":"687ef6ff-0cea-474f-892a-fdca8d9e386e","Type":"ContainerStarted","Data":"9f37ca7f9640f8e3c7e2577d486e6b99db8050b106a9ca37dcb670c5d34bb093"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.932244 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.938558 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" event={"ID":"374ad670-12b0-4842-a6ca-1cbf355b5a99","Type":"ContainerStarted","Data":"2e6275f843b31549a339b27a3ebc134ad73a09fa054de9d8ff92c4fa982ce685"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.938621 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.938748 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.438728543 +0000 UTC m=+147.966203739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.938974 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:10 crc kubenswrapper[4872]: E0127 06:56:10.939428 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.439413363 +0000 UTC m=+147.966888559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.941803 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" event={"ID":"36c9a520-9452-469d-b65e-635f7ea74105","Type":"ContainerStarted","Data":"c4af6175e82dc2def45a90192e85375f4f77ce9ec243d5bf469fa32034d07302"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.942028 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.943152 4872 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qlql6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.943200 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" podUID="36c9a520-9452-469d-b65e-635f7ea74105" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.944549 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" event={"ID":"d579a88b-344f-4198-8070-bc7a7b73bbaf","Type":"ContainerStarted","Data":"ff64c5fac491f5b5d6913eea2824e1ea80789f993861d3bd516b19ccf13546e3"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.947365 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" 
event={"ID":"a413fe38-46c6-4603-96cc-4667937fe849","Type":"ContainerStarted","Data":"d30f644966ae32e78ead26902235e99e8018b04ea5772ff1cf140a15df5aa9eb"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.950187 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" event={"ID":"8473f532-cab1-4b2b-8402-f4eec9d94bd2","Type":"ContainerStarted","Data":"f8ade3ef5c750a6e7b2a3007e220b1c1e8c67019b8e1d9afee27246144b340f0"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.951988 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" event={"ID":"a8587651-0242-4cd8-b60d-1551b4908dfe","Type":"ContainerStarted","Data":"d168b6e94bbcea2c9d4579061dd58f20fe6a5c1fb5a91616c938eae3a39d3bb9"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.954035 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" event={"ID":"b5ca7461-c53d-4337-b195-1a7b3e1a0d71","Type":"ContainerStarted","Data":"b9e190f5a9d741a6facf7a73a2785ded399fe9675b2064298dcc44286681eb89"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.955307 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nqgbm" event={"ID":"7b17ad7a-bf74-4ffd-ae08-f02802fdaa22","Type":"ContainerStarted","Data":"6ae4643d1e7f3f0716b0a667c5492069607a2bbd6124a32216e7a8cba51c42ad"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.957574 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" event={"ID":"34692f41-ceea-4bcf-a05d-8b0ceca661df","Type":"ContainerStarted","Data":"56bec2838e2410d47f582e61feaa1b34fd74627024dc15cf6a98599517c24bbc"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.960122 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" event={"ID":"9b946d93-fe1b-4909-bb68-b0ad4eb3e2ee","Type":"ContainerStarted","Data":"8c4caae3f94cc04e4616d2acd93f7193af08355f618606ae63ba8ad3916f8cf5"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.962458 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" event={"ID":"2ec25a43-b046-4e04-ab5c-a9cc62862bf9","Type":"ContainerStarted","Data":"99092080623de876ec66d395d6582c485a2ccc6f393d7c0b3639c5508e4a01fe"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.962504 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" event={"ID":"2ec25a43-b046-4e04-ab5c-a9cc62862bf9","Type":"ContainerStarted","Data":"d9b80657042b6a00499a72a6c0f009f7819584bcce2865dddf39d46679051194"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.962756 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.964005 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" podStartSLOduration=126.963985246 podStartE2EDuration="2m6.963985246s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 06:56:10.960015403 +0000 UTC m=+147.487490629" watchObservedRunningTime="2026-01-27 06:56:10.963985246 +0000 UTC m=+147.491460432" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.967077 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" event={"ID":"def7d421-8c6f-4694-96ed-ac9beaeed3f6","Type":"ContainerStarted","Data":"bc17b8980b7f1aaf88f51edc355ce8bff91837e7e7434009a8c43ba5c9b02840"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.967259 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" event={"ID":"def7d421-8c6f-4694-96ed-ac9beaeed3f6","Type":"ContainerStarted","Data":"ce7304f3699e78e30357bd10e5c5205810005ff7f28b15538abcd30d89af5846"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.970983 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" event={"ID":"be6179fb-ca66-4f99-9ddc-cd50f0952a1d","Type":"ContainerStarted","Data":"0e9626c82f613d556b257044af456b04996230637a54a64ac86f540f1a1dae81"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.971033 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" event={"ID":"be6179fb-ca66-4f99-9ddc-cd50f0952a1d","Type":"ContainerStarted","Data":"9db29d410dab31f980d50675c5595643b60f3f0dac2eaef0957f08b2aa635f9d"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.974456 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n6d8q" event={"ID":"d3bd541e-8647-49ce-be1c-85f6e5c90f86","Type":"ContainerStarted","Data":"6d19bf0b29fdc2cb07575a8ccd6c41c52a1d074ed08c22b6c4f37534059a9a78"} Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.974622 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.975728 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.976718 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.976874 4872 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-76vrz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.976883 4872 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mmpq8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.976990 4872 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.976926 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" podUID="6221868d-3674-4d0f-9796-29338e188d50" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.977322 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lxtl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.977466 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.987382 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8pnkr" Jan 27 06:56:10 crc kubenswrapper[4872]: I0127 06:56:10.992725 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p6zmz" podStartSLOduration=126.992693586 podStartE2EDuration="2m6.992693586s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:10.991514982 +0000 UTC m=+147.518990178" watchObservedRunningTime="2026-01-27 06:56:10.992693586 +0000 UTC m=+147.520168792" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.041125 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.042385 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.542365106 +0000 UTC m=+148.069840302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.066603 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-888dw" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.074713 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" podStartSLOduration=127.074694617 podStartE2EDuration="2m7.074694617s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.038530047 +0000 UTC m=+147.566005243" watchObservedRunningTime="2026-01-27 06:56:11.074694617 +0000 UTC m=+147.602169813" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.076964 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tktwd" podStartSLOduration=127.076955922 podStartE2EDuration="2m7.076955922s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.066235989 +0000 UTC m=+147.593711185" watchObservedRunningTime="2026-01-27 06:56:11.076955922 +0000 UTC m=+147.604431118" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.147350 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.147921 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.647907932 +0000 UTC m=+148.175383128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.149791 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w5f8q" podStartSLOduration=127.149769575 podStartE2EDuration="2m7.149769575s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.121396415 +0000 UTC m=+147.648871611" watchObservedRunningTime="2026-01-27 06:56:11.149769575 +0000 UTC m=+147.677244771" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.223461 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" podStartSLOduration=127.223445482 podStartE2EDuration="2m7.223445482s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.221056025 +0000 UTC m=+147.748531241" watchObservedRunningTime="2026-01-27 06:56:11.223445482 +0000 UTC m=+147.750920678" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.257668 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.258032 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.758015607 +0000 UTC m=+148.285490803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.312279 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kp6fd" podStartSLOduration=127.312260957 podStartE2EDuration="2m7.312260957s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.311435533 +0000 UTC m=+147.838910729" watchObservedRunningTime="2026-01-27 06:56:11.312260957 +0000 UTC m=+147.839736153" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.359293 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.359691 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.859678554 +0000 UTC m=+148.387153740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.433273 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nqgbm" podStartSLOduration=8.433254078 podStartE2EDuration="8.433254078s" podCreationTimestamp="2026-01-27 06:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.411221358 +0000 UTC m=+147.938696554" watchObservedRunningTime="2026-01-27 06:56:11.433254078 +0000 UTC m=+147.960729274" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.460298 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.460797 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:11.960763884 +0000 UTC m=+148.488239110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.502341 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:11 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:11 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:11 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.502413 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.535939 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" podStartSLOduration=127.535916524 podStartE2EDuration="2m7.535916524s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.477449255 +0000 UTC m=+148.004924451" watchObservedRunningTime="2026-01-27 06:56:11.535916524 +0000 UTC m=+148.063391720" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.562188 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.562596 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.062580326 +0000 UTC m=+148.590055522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.648416 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v6d6v" podStartSLOduration=127.648382595 podStartE2EDuration="2m7.648382595s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.64394314 +0000 UTC m=+148.171418336" watchObservedRunningTime="2026-01-27 06:56:11.648382595 +0000 UTC m=+148.175857791" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.663338 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.663539 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.163513512 +0000 UTC m=+148.690988708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.664053 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.664417 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.164406367 +0000 UTC m=+148.691881563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.682984 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ml5m6" podStartSLOduration=127.682967731 podStartE2EDuration="2m7.682967731s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.681608922 +0000 UTC m=+148.209084128" watchObservedRunningTime="2026-01-27 06:56:11.682967731 +0000 UTC m=+148.210442927" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.728488 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n6d8q" podStartSLOduration=9.728469303 podStartE2EDuration="9.728469303s" podCreationTimestamp="2026-01-27 06:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.727350292 +0000 UTC m=+148.254825488" watchObservedRunningTime="2026-01-27 06:56:11.728469303 +0000 UTC m=+148.255944499" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.765518 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.766061 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.266020312 +0000 UTC m=+148.793495508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.807900 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-62544" podStartSLOduration=127.807882123 podStartE2EDuration="2m7.807882123s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.804676343 +0000 UTC m=+148.332151529" watchObservedRunningTime="2026-01-27 06:56:11.807882123 +0000 UTC m=+148.335357329" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.866790 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.867202 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.367188705 +0000 UTC m=+148.894663901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.903544 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" podStartSLOduration=127.90352709 podStartE2EDuration="2m7.90352709s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:11.900630468 +0000 UTC m=+148.428105684" watchObservedRunningTime="2026-01-27 06:56:11.90352709 +0000 UTC m=+148.431002286" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.968376 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.968549 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.468523313 +0000 UTC m=+148.995998509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.969361 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:11 crc kubenswrapper[4872]: E0127 06:56:11.969735 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.469721606 +0000 UTC m=+148.997196802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.976827 4872 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vvsln container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.976906 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:56:11 crc kubenswrapper[4872]: I0127 06:56:11.995573 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qlql6" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.072672 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.074027 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.080482 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.57483668 +0000 UTC m=+149.102311876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.110731 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.176193 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.177546 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.177653 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.178336 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.183206 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.683184306 +0000 UTC m=+149.210659502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.184036 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.194566 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.202250 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.280142 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.280896 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.7808243 +0000 UTC m=+149.308299496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.382067 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.382690 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.882672701 +0000 UTC m=+149.410147897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.419308 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.426466 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.440602 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.495456 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.495937 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:12.995908324 +0000 UTC m=+149.523383520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.497665 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:12 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:12 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:12 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.497804 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.503498 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.504126 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.004107635 +0000 UTC m=+149.531582831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.604625 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.10460192 +0000 UTC m=+149.632077116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.604661 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.604880 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.605161 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.105154825 +0000 UTC m=+149.632630021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.705632 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.706002 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.205987558 +0000 UTC m=+149.733462754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.808507 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.809304 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.309289061 +0000 UTC m=+149.836764267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.922040 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:12 crc kubenswrapper[4872]: E0127 06:56:12.922339 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.422324709 +0000 UTC m=+149.949799905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.981139 4872 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vvsln container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:56:12 crc kubenswrapper[4872]: I0127 06:56:12.981185 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.020984 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" event={"ID":"b5ca7461-c53d-4337-b195-1a7b3e1a0d71","Type":"ContainerStarted","Data":"ed3390e328e7ec83653741e516a6b8e0e89804bcc953c59b4e1b62052d5447fd"} Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.021026 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" event={"ID":"b5ca7461-c53d-4337-b195-1a7b3e1a0d71","Type":"ContainerStarted","Data":"ec093acd4f13d0ed08bef6af407f27678fdd49ce372310ace6cb1432a571b282"} Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.025746 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.026055 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.526043324 +0000 UTC m=+150.053518520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.127354 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.128232 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.628217335 +0000 UTC m=+150.155692531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.228784 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.229188 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.729171602 +0000 UTC m=+150.256646798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.333290 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.333669 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.833654058 +0000 UTC m=+150.361129254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.435022 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.435398 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:13.935383816 +0000 UTC m=+150.462859002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.498649 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:13 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:13 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:13 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.498701 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.535606 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.535938 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.035919412 +0000 UTC m=+150.563394608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.637478 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.637862 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.137829765 +0000 UTC m=+150.665304961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: W0127 06:56:13.644971 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-afb1834f007f63a40e8d079f261258788b943d242ab12164f44e14a6960ff915 WatchSource:0}: Error finding container afb1834f007f63a40e8d079f261258788b943d242ab12164f44e14a6960ff915: Status 404 returned error can't find the container with id afb1834f007f63a40e8d079f261258788b943d242ab12164f44e14a6960ff915 Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.738106 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.738342 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.238299419 +0000 UTC m=+150.765774625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.738421 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.738736 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.238723481 +0000 UTC m=+150.766198677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.839545 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.839722 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.339692338 +0000 UTC m=+150.867167534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.839852 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.840111 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.340099889 +0000 UTC m=+150.867575085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:13 crc kubenswrapper[4872]: I0127 06:56:13.940909 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:13 crc kubenswrapper[4872]: E0127 06:56:13.941100 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.441062606 +0000 UTC m=+150.968537802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.026217 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5e2c42ff401d06cf26e4a17c3de320628badb36acd23b4509107cc1a48746b77"} Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.028116 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"afb1834f007f63a40e8d079f261258788b943d242ab12164f44e14a6960ff915"} Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.029123 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f220b07356ab72254fcf26cb90c609c8d9134683b575ab95146098d245345d17"} Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.042083 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.042617 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.542596349 +0000 UTC m=+151.070071615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.142932 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.143100 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.643079903 +0000 UTC m=+151.170555099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.143158 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.143892 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.643874015 +0000 UTC m=+151.171349211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.243877 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.244059 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.74403561 +0000 UTC m=+151.271510806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.245744 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.246079 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.746070127 +0000 UTC m=+151.273545323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.255725 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mm92d"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.258072 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.263361 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.303595 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm92d"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.347488 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.347913 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fpw2\" (UniqueName: \"kubernetes.io/projected/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-kube-api-access-8fpw2\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.348028 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-catalog-content\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.348112 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-utilities\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.348311 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.848294389 +0000 UTC m=+151.375769595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.450255 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-utilities\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.450617 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.451024 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:14.951008987 +0000 UTC m=+151.478484183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.451033 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-utilities\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.451573 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fpw2\" (UniqueName: \"kubernetes.io/projected/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-kube-api-access-8fpw2\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.452043 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-catalog-content\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.452520 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-catalog-content\") pod 
\"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.461063 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9k9dq"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.462123 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.481638 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.497079 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:14 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:14 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:14 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.497464 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.509570 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fnkf9" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.553196 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fpw2\" (UniqueName: \"kubernetes.io/projected/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-kube-api-access-8fpw2\") pod \"certified-operators-mm92d\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.553654 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.553769 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.053750453 +0000 UTC m=+151.581225639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.554084 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pkc\" (UniqueName: \"kubernetes.io/projected/22382085-00d5-42cd-97fb-098b131498d6-kube-api-access-n8pkc\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.554263 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-catalog-content\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.554400 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-utilities\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.554521 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.555275 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.055263506 +0000 UTC m=+151.582738702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.575629 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.620384 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.620432 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.622121 4872 patch_prober.go:28] interesting pod/console-f9d7485db-ntnst container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.5:8443/health\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.622165 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ntnst" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.5:8443/health\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.655388 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.655704 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.155686678 +0000 UTC m=+151.683161864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.656033 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pkc\" (UniqueName: \"kubernetes.io/projected/22382085-00d5-42cd-97fb-098b131498d6-kube-api-access-n8pkc\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.657735 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-catalog-content\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.658240 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-utilities\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.658349 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.658164 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-catalog-content\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.659186 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.159177016 +0000 UTC m=+151.686652212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.659555 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-utilities\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.669471 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.671024 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.753899 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8pkc\" (UniqueName: \"kubernetes.io/projected/22382085-00d5-42cd-97fb-098b131498d6-kube-api-access-n8pkc\") pod \"community-operators-9k9dq\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.759783 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.760094 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggk9r\" (UniqueName: \"kubernetes.io/projected/a605f814-0bb0-4f00-88c9-57a7c47f7ece-kube-api-access-ggk9r\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.760176 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-utilities\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.760210 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-catalog-content\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: E0127 06:56:14.760512 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 06:56:15.260487584 +0000 UTC m=+151.787962780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.774795 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.798943 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k9dq"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.799204 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.861428 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-utilities\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.862175 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-catalog-content\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.862592 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.862686 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggk9r\" (UniqueName: \"kubernetes.io/projected/a605f814-0bb0-4f00-88c9-57a7c47f7ece-kube-api-access-ggk9r\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.862521 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-catalog-content\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.862136 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-utilities\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:14 crc kubenswrapper[4872]: 
E0127 06:56:14.863222 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.3632041 +0000 UTC m=+151.890679366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.937933 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.939661 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.956321 4872 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jghd5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]log ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]etcd ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/max-in-flight-filter ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 06:56:14 crc kubenswrapper[4872]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/openshift.io-startinformers ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 06:56:14 crc kubenswrapper[4872]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 06:56:14 crc kubenswrapper[4872]: livez check failed Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.957041 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" podUID="a413fe38-46c6-4603-96cc-4667937fe849" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.963444 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:14 crc 
kubenswrapper[4872]: E0127 06:56:14.963957 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.46393989 +0000 UTC m=+151.991415076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.965763 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-854hx"] Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.966794 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:14 crc kubenswrapper[4872]: I0127 06:56:14.986488 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggk9r\" (UniqueName: \"kubernetes.io/projected/a605f814-0bb0-4f00-88c9-57a7c47f7ece-kube-api-access-ggk9r\") pod \"certified-operators-hthrz\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.026919 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-854hx"] Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.052696 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9ad6c1dd1b7310c28820fd87b3a54f0a53ecc9d2bdfedf5d3714d4d7e5b905f7"} Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.067135 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-catalog-content\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.067195 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkxr\" (UniqueName: \"kubernetes.io/projected/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-kube-api-access-qmkxr\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.067227 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-utilities\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.067248 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.069385 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.569373423 +0000 UTC m=+152.096848619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.088620 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" event={"ID":"b5ca7461-c53d-4337-b195-1a7b3e1a0d71","Type":"ContainerStarted","Data":"f98842f26a102a0b22e20e17d2e714b080a8521bf3e9b7ed94c9f48bce4eb55e"} Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.091546 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4463377cfc372c95b9835352a2a95f656ac2b3f2f29760ea0311710cb9f83a1c"} Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.091957 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.098001 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f8a757f4bf3cecd7d794c4d9e014b042c7be6557355fc2ba1c1f0fc6e710d09c"} Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.168607 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.168923 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-catalog-content\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.168962 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmkxr\" (UniqueName: \"kubernetes.io/projected/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-kube-api-access-qmkxr\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.168996 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-utilities\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.169929 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.669909938 +0000 UTC m=+152.197385144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.172106 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-catalog-content\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.172774 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-utilities\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.269248 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmkxr\" (UniqueName: \"kubernetes.io/projected/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-kube-api-access-qmkxr\") pod \"community-operators-854hx\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.270597 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.271337 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.771310357 +0000 UTC m=+152.298785553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.271816 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6k8gx" podStartSLOduration=13.271790472 podStartE2EDuration="13.271790472s" podCreationTimestamp="2026-01-27 06:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:15.215207956 +0000 UTC m=+151.742683152" watchObservedRunningTime="2026-01-27 06:56:15.271790472 +0000 UTC m=+151.799265668" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.304167 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.304224 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.380774 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.381219 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.881198867 +0000 UTC m=+152.408674063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.436088 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.436155 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.436454 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.436473 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.482943 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.484718 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.485094 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:15.985077366 +0000 UTC m=+152.512552572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.496060 4872 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.496285 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.516104 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:15 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:15 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:15 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.516162 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.585461 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.586392 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.086370882 +0000 UTC m=+152.613846078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.687441 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.687760 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.187748761 +0000 UTC m=+152.715223957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.788604 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.788806 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.28878092 +0000 UTC m=+152.816256116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.788908 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.789250 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.289240513 +0000 UTC m=+152.816715709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.892282 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.892615 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.392598387 +0000 UTC m=+152.920073583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.942342 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm92d"] Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.959119 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k9dq"] Jan 27 06:56:15 crc kubenswrapper[4872]: I0127 06:56:15.996773 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:15 crc kubenswrapper[4872]: E0127 06:56:15.997105 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.497093724 +0000 UTC m=+153.024568920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.038591 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.100630 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:16 crc kubenswrapper[4872]: E0127 06:56:16.101888 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.601870839 +0000 UTC m=+153.129346035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.141483 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm92d" event={"ID":"7e6a557b-5fc3-46e7-9799-1b75567c2dc6","Type":"ContainerStarted","Data":"be8e41d8e961be6c1fe6247b1394cb087cafdd280c8eece962eac81c3dba3e0a"} Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.153674 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k9dq" event={"ID":"22382085-00d5-42cd-97fb-098b131498d6","Type":"ContainerStarted","Data":"061cd47d145e63a89b484d6b8545726e6cf496ca2c7ed6c65c0aa24fabbf2ba9"} Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.193740 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.203111 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:16 crc kubenswrapper[4872]: E0127 06:56:16.204992 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.704971206 +0000 UTC m=+153.232446402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.221173 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pp8h2"] Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.222262 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.238211 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.243135 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-76vrz" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.296910 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pp8h2"] Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.303972 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.304636 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-catalog-content\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.304751 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-utilities\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.304779 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcc5t\" (UniqueName: \"kubernetes.io/projected/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-kube-api-access-vcc5t\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: E0127 06:56:16.304958 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.804938435 +0000 UTC m=+153.332413631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.329495 4872 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T06:56:15.496085846Z","Handler":null,"Name":""} Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.406643 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-catalog-content\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.406957 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.407059 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-utilities\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.407142 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcc5t\" (UniqueName: \"kubernetes.io/projected/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-kube-api-access-vcc5t\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.407378 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-catalog-content\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: E0127 06:56:16.407661 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:56:16.907648571 +0000 UTC m=+153.435123767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rthkh" (UID: "c17752d4-ad64-445d-882f-134f79928b40") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.408113 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-utilities\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.493025 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcc5t\" (UniqueName: \"kubernetes.io/projected/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-kube-api-access-vcc5t\") pod \"redhat-marketplace-pp8h2\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.503300 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:16 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:16 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:16 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.503355 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.509612 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:16 crc kubenswrapper[4872]: E0127 06:56:16.509994 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:56:17.009970586 +0000 UTC m=+153.537445792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.575265 4872 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.575338 4872 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.577731 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.610648 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.624366 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cvk24"] Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.625420 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.636979 4872 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
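The failed TearDown/MountDevice operations above reflect a startup ordering issue rather than a persistent fault: the kubelet cannot yet find kubevirt.io.hostpath-provisioner among its registered CSI drivers, so it schedules a retry after the logged 500ms backoff, and the pending mounts succeed once csi_plugin.go records the plugin registration a few entries later. As a rough illustration of that retry-until-registered pattern, the Go sketch below uses entirely hypothetical types and names (driverRegistry, mountVolume); it is not kubelet source code, only a minimal model of the behaviour visible in these entries.

```go
// Minimal sketch (hypothetical names, not kubelet source): a mount operation
// that keeps failing until the named CSI driver appears in a registry, then
// succeeds on the next retry, mirroring the "No retries permitted until ...
// (durationBeforeRetry 500ms)" / "Register new plugin" sequence in the log.
package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry stands in for the kubelet's list of registered CSI drivers.
type driverRegistry struct {
	mu      sync.Mutex
	drivers map[string]bool
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = true
}

func (r *driverRegistry) has(name string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.drivers[name]
}

// mountVolume fails while the driver is unregistered, like MountDevice above.
func mountVolume(reg *driverRegistry, driver, volume string) error {
	if !reg.has(driver) {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	fmt.Printf("MountVolume succeeded for %s via %s\n", volume, driver)
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]bool{}}
	const backoff = 500 * time.Millisecond // matches durationBeforeRetry in the log

	// Simulate the plugin registering shortly after the first failed attempts.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner")
	}()

	for attempt := 1; attempt <= 10; attempt++ {
		err := mountVolume(reg, "kubevirt.io.hostpath-provisioner", "pvc-657094db")
		if err == nil {
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
	}
	fmt.Println("giving up after repeated failures")
}
```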
Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.637052 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.688274 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvk24"] Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.715504 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-utilities\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.715984 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-catalog-content\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.716041 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgp6\" (UniqueName: \"kubernetes.io/projected/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-kube-api-access-2kgp6\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.729130 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.817255 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgp6\" (UniqueName: \"kubernetes.io/projected/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-kube-api-access-2kgp6\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.817287 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-utilities\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.817361 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-catalog-content\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.817892 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-catalog-content\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.817985 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-utilities\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.859282 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgp6\" (UniqueName: \"kubernetes.io/projected/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-kube-api-access-2kgp6\") pod \"redhat-marketplace-cvk24\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.983264 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:56:16 crc kubenswrapper[4872]: I0127 06:56:16.995081 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-854hx"] Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.106637 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rthkh\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.133185 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.161227 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-854hx" event={"ID":"e29c05a4-c51b-4ef3-b341-fca8f5f68b53","Type":"ContainerStarted","Data":"7e707819e52f33703161021d94926136719b57518f1965853cfb13d8c18e21dc"} Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.162812 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerStarted","Data":"a28d382974c113924e6991f5ef06bd8982d7c525f1cfccb4b42e3776c1766c6c"} Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.162909 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerStarted","Data":"26ebcb09c7bcf30d6dfa858c61e2b9a064bc262c29e6d7b4ec95119b6fc44d09"} Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.170132 4872 generic.go:334] "Generic (PLEG): container finished" podID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerID="3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90" exitCode=0 Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.170278 4872 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-mm92d" event={"ID":"7e6a557b-5fc3-46e7-9799-1b75567c2dc6","Type":"ContainerDied","Data":"3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90"} Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.175125 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.177375 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.180331 4872 generic.go:334] "Generic (PLEG): container finished" podID="22382085-00d5-42cd-97fb-098b131498d6" containerID="6cd5a4e3c9a141c32577105f0398b99c7cc4095819e87dd7acf4d2bd5f7882ea" exitCode=0 Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.180399 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k9dq" event={"ID":"22382085-00d5-42cd-97fb-098b131498d6","Type":"ContainerDied","Data":"6cd5a4e3c9a141c32577105f0398b99c7cc4095819e87dd7acf4d2bd5f7882ea"} Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.502189 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:17 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:17 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:17 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.502750 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.575634 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.584639 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pp8h2"] Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.642400 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.643111 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:17 crc kubenswrapper[4872]: W0127 06:56:17.658234 4872 reflector.go:561] object-"openshift-kube-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Jan 27 06:56:17 crc kubenswrapper[4872]: E0127 06:56:17.658281 4872 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 06:56:17 crc kubenswrapper[4872]: W0127 06:56:17.658354 4872 reflector.go:561] object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-5pr6n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Jan 27 06:56:17 crc kubenswrapper[4872]: E0127 06:56:17.658391 4872 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-5pr6n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-5pr6n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.664888 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8r2pn"] Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.666177 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.676003 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.764163 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.764232 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpt9\" (UniqueName: \"kubernetes.io/projected/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-kube-api-access-tmpt9\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.764254 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ef5e011-1671-4c35-968b-e74d5f43deaf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.764279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-utilities\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.764306 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-catalog-content\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.772410 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8r2pn"] Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.804580 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.865371 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-catalog-content\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.865827 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.865874 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tmpt9\" (UniqueName: \"kubernetes.io/projected/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-kube-api-access-tmpt9\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.865892 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ef5e011-1671-4c35-968b-e74d5f43deaf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.865939 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-utilities\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.866312 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-utilities\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.867180 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-catalog-content\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.867558 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ef5e011-1671-4c35-968b-e74d5f43deaf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.919964 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpt9\" (UniqueName: \"kubernetes.io/projected/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-kube-api-access-tmpt9\") pod \"redhat-operators-8r2pn\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:17 crc kubenswrapper[4872]: I0127 06:56:17.999685 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.014072 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tltfz"] Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.015598 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.030738 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvk24"] Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.046407 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tltfz"] Jan 27 06:56:18 crc kubenswrapper[4872]: W0127 06:56:18.051964 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbce5e17_ac6e_43a9_b73a_9a6a50fee3e1.slice/crio-85fde09e42885abc835568bae0b47c9ef600217534d1c56b2261e6e5a17791ee WatchSource:0}: Error finding container 85fde09e42885abc835568bae0b47c9ef600217534d1c56b2261e6e5a17791ee: Status 404 returned error can't find the container with id 85fde09e42885abc835568bae0b47c9ef600217534d1c56b2261e6e5a17791ee Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.081677 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-catalog-content\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.081733 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-utilities\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.081784 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvx77\" (UniqueName: \"kubernetes.io/projected/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-kube-api-access-zvx77\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.134010 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.185312 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-catalog-content\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.185373 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-utilities\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.185429 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvx77\" (UniqueName: \"kubernetes.io/projected/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-kube-api-access-zvx77\") pod \"redhat-operators-tltfz\" (UID: 
\"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.186764 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-catalog-content\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.187104 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-utilities\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.188796 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.190711 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.199123 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.199499 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rthkh"] Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.202360 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.217193 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.241130 4872 generic.go:334] "Generic (PLEG): container finished" podID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerID="4aa2f49072a3eaa9d258f84ab48a0273b5d8b99e5f819a9a55569b06874810ff" exitCode=0 Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.241222 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pp8h2" event={"ID":"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b","Type":"ContainerDied","Data":"4aa2f49072a3eaa9d258f84ab48a0273b5d8b99e5f819a9a55569b06874810ff"} Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.241249 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pp8h2" event={"ID":"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b","Type":"ContainerStarted","Data":"1a88007679c4ab55d2b11af03c17ffb9a918bf4c7c43b716a2049b1159321378"} Jan 27 06:56:18 crc kubenswrapper[4872]: W0127 06:56:18.249735 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17752d4_ad64_445d_882f_134f79928b40.slice/crio-f38c900663ef2977346e360d787b7e3b7476c27709c1f92a97cfe0985958ba75 WatchSource:0}: Error finding container f38c900663ef2977346e360d787b7e3b7476c27709c1f92a97cfe0985958ba75: Status 404 returned error can't find the container with id f38c900663ef2977346e360d787b7e3b7476c27709c1f92a97cfe0985958ba75 Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.258469 4872 generic.go:334] "Generic (PLEG): container finished" 
podID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerID="989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0" exitCode=0 Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.258573 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-854hx" event={"ID":"e29c05a4-c51b-4ef3-b341-fca8f5f68b53","Type":"ContainerDied","Data":"989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0"} Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.261966 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvk24" event={"ID":"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1","Type":"ContainerStarted","Data":"85fde09e42885abc835568bae0b47c9ef600217534d1c56b2261e6e5a17791ee"} Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.271201 4872 generic.go:334] "Generic (PLEG): container finished" podID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerID="a28d382974c113924e6991f5ef06bd8982d7c525f1cfccb4b42e3776c1766c6c" exitCode=0 Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.271257 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerDied","Data":"a28d382974c113924e6991f5ef06bd8982d7c525f1cfccb4b42e3776c1766c6c"} Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.294006 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecc01578-0e24-429b-9a33-908ecd2ed349-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.294185 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecc01578-0e24-429b-9a33-908ecd2ed349-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.310188 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvx77\" (UniqueName: \"kubernetes.io/projected/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-kube-api-access-zvx77\") pod \"redhat-operators-tltfz\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.381525 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.403141 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecc01578-0e24-429b-9a33-908ecd2ed349-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.403291 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecc01578-0e24-429b-9a33-908ecd2ed349-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.403393 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecc01578-0e24-429b-9a33-908ecd2ed349-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.464168 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecc01578-0e24-429b-9a33-908ecd2ed349-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.519932 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:18 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:18 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:18 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.519997 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.610095 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.822164 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8r2pn"] Jan 27 06:56:18 crc kubenswrapper[4872]: E0127 06:56:18.904379 4872 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:56:18 crc kubenswrapper[4872]: E0127 06:56:18.904420 4872 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/revision-pruner-8-crc: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:56:18 crc kubenswrapper[4872]: E0127 06:56:18.904476 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access podName:0ef5e011-1671-4c35-968b-e74d5f43deaf nodeName:}" failed. No retries permitted until 2026-01-27 06:56:19.404458038 +0000 UTC m=+155.931933234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access") pod "revision-pruner-8-crc" (UID: "0ef5e011-1671-4c35-968b-e74d5f43deaf") : failed to sync configmap cache: timed out waiting for the condition Jan 27 06:56:18 crc kubenswrapper[4872]: I0127 06:56:18.979077 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.143338 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tltfz"] Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.196211 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.309411 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" event={"ID":"c17752d4-ad64-445d-882f-134f79928b40","Type":"ContainerStarted","Data":"644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df"} Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.309504 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" event={"ID":"c17752d4-ad64-445d-882f-134f79928b40","Type":"ContainerStarted","Data":"f38c900663ef2977346e360d787b7e3b7476c27709c1f92a97cfe0985958ba75"} Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.309600 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.346688 4872 generic.go:334] "Generic (PLEG): container finished" podID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerID="64b83408cdd6d78bccd848f3e78439f78f2230c0d8f94b6b5e33b5caf3790085" exitCode=0 Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.347250 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvk24" event={"ID":"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1","Type":"ContainerDied","Data":"64b83408cdd6d78bccd848f3e78439f78f2230c0d8f94b6b5e33b5caf3790085"} Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.393926 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerStarted","Data":"e66fe3ab72d2a368cc7251e78c99ca5eee1ef7ad94963a5a1e0ce27f7b6b80a3"} Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.399175 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.401676 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerStarted","Data":"e7d5a61efb776a101cb7fd0503189524fdf2582fb9dbba30afe5cfaabe7c4e93"} Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.401730 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerStarted","Data":"e740d8c38db9569764f44b641336932c9cbb17c42c392a7d737c52cb8d70fb08"} Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.412582 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" podStartSLOduration=135.412514635 podStartE2EDuration="2m15.412514635s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:19.360066257 +0000 UTC m=+155.887541473" watchObservedRunningTime="2026-01-27 06:56:19.412514635 +0000 UTC m=+155.939989831" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.450749 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.461429 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.482406 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.505419 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:19 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:19 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:19 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.505497 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.899392 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.943601 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:19 crc kubenswrapper[4872]: I0127 06:56:19.955518 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jghd5" Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.483391 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0ef5e011-1671-4c35-968b-e74d5f43deaf","Type":"ContainerStarted","Data":"6f738a1efa6de6ee6bf892a7d709e36d0666e83f32b3715a15c35458fc08f60a"} Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.500339 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecc01578-0e24-429b-9a33-908ecd2ed349","Type":"ContainerStarted","Data":"feb563babecc1b46d037ea9ee30a8aa0af8d5f2de2c18445130bb948c9bc9f8e"} Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.501412 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:20 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:20 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:20 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.501460 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.503219 4872 generic.go:334] "Generic (PLEG): container finished" podID="374ad670-12b0-4842-a6ca-1cbf355b5a99" containerID="2e6275f843b31549a339b27a3ebc134ad73a09fa054de9d8ff92c4fa982ce685" exitCode=0 Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.503263 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" event={"ID":"374ad670-12b0-4842-a6ca-1cbf355b5a99","Type":"ContainerDied","Data":"2e6275f843b31549a339b27a3ebc134ad73a09fa054de9d8ff92c4fa982ce685"} Jan 27 
06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.514600 4872 generic.go:334] "Generic (PLEG): container finished" podID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerID="ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709" exitCode=0 Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.514705 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerDied","Data":"ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709"} Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.548106 4872 generic.go:334] "Generic (PLEG): container finished" podID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerID="e7d5a61efb776a101cb7fd0503189524fdf2582fb9dbba30afe5cfaabe7c4e93" exitCode=0 Jan 27 06:56:20 crc kubenswrapper[4872]: I0127 06:56:20.548327 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerDied","Data":"e7d5a61efb776a101cb7fd0503189524fdf2582fb9dbba30afe5cfaabe7c4e93"} Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.004095 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n6d8q" Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.496373 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:21 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:21 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:21 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.496811 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.630898 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0ef5e011-1671-4c35-968b-e74d5f43deaf","Type":"ContainerStarted","Data":"552f600b611a31df8091e6dfa8babb820c34cb15acfb080e3a6990fe9feae788"} Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.655351 4872 generic.go:334] "Generic (PLEG): container finished" podID="ecc01578-0e24-429b-9a33-908ecd2ed349" containerID="d7e5955039ae113ed0e9e9f937c5b59c68359ee5bd205548a580eaa4152f522f" exitCode=0 Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.655901 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecc01578-0e24-429b-9a33-908ecd2ed349","Type":"ContainerDied","Data":"d7e5955039ae113ed0e9e9f937c5b59c68359ee5bd205548a580eaa4152f522f"} Jan 27 06:56:21 crc kubenswrapper[4872]: I0127 06:56:21.678560 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.677958348 podStartE2EDuration="4.677958348s" podCreationTimestamp="2026-01-27 06:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:21.648732404 +0000 UTC m=+158.176207600" 
watchObservedRunningTime="2026-01-27 06:56:21.677958348 +0000 UTC m=+158.205433544" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.231409 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.339547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2d5\" (UniqueName: \"kubernetes.io/projected/374ad670-12b0-4842-a6ca-1cbf355b5a99-kube-api-access-qw2d5\") pod \"374ad670-12b0-4842-a6ca-1cbf355b5a99\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.339664 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/374ad670-12b0-4842-a6ca-1cbf355b5a99-secret-volume\") pod \"374ad670-12b0-4842-a6ca-1cbf355b5a99\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.339720 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374ad670-12b0-4842-a6ca-1cbf355b5a99-config-volume\") pod \"374ad670-12b0-4842-a6ca-1cbf355b5a99\" (UID: \"374ad670-12b0-4842-a6ca-1cbf355b5a99\") " Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.340671 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374ad670-12b0-4842-a6ca-1cbf355b5a99-config-volume" (OuterVolumeSpecName: "config-volume") pod "374ad670-12b0-4842-a6ca-1cbf355b5a99" (UID: "374ad670-12b0-4842-a6ca-1cbf355b5a99"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.361436 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374ad670-12b0-4842-a6ca-1cbf355b5a99-kube-api-access-qw2d5" (OuterVolumeSpecName: "kube-api-access-qw2d5") pod "374ad670-12b0-4842-a6ca-1cbf355b5a99" (UID: "374ad670-12b0-4842-a6ca-1cbf355b5a99"). InnerVolumeSpecName "kube-api-access-qw2d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.364952 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374ad670-12b0-4842-a6ca-1cbf355b5a99-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "374ad670-12b0-4842-a6ca-1cbf355b5a99" (UID: "374ad670-12b0-4842-a6ca-1cbf355b5a99"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.441445 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2d5\" (UniqueName: \"kubernetes.io/projected/374ad670-12b0-4842-a6ca-1cbf355b5a99-kube-api-access-qw2d5\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.441869 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/374ad670-12b0-4842-a6ca-1cbf355b5a99-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.441881 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374ad670-12b0-4842-a6ca-1cbf355b5a99-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.497058 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:22 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:22 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:22 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.497113 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.728344 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.728337 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-gwlv7" event={"ID":"374ad670-12b0-4842-a6ca-1cbf355b5a99","Type":"ContainerDied","Data":"dcffe5d606a0d4e1a55a6805801999c2e77db922966453fc76f6b3ed8834ea72"} Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.729097 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcffe5d606a0d4e1a55a6805801999c2e77db922966453fc76f6b3ed8834ea72" Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.758221 4872 generic.go:334] "Generic (PLEG): container finished" podID="0ef5e011-1671-4c35-968b-e74d5f43deaf" containerID="552f600b611a31df8091e6dfa8babb820c34cb15acfb080e3a6990fe9feae788" exitCode=0 Jan 27 06:56:22 crc kubenswrapper[4872]: I0127 06:56:22.759228 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0ef5e011-1671-4c35-968b-e74d5f43deaf","Type":"ContainerDied","Data":"552f600b611a31df8091e6dfa8babb820c34cb15acfb080e3a6990fe9feae788"} Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.498474 4872 patch_prober.go:28] interesting pod/router-default-5444994796-gdn9v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:56:23 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Jan 27 06:56:23 crc kubenswrapper[4872]: [+]process-running ok Jan 27 06:56:23 crc kubenswrapper[4872]: healthz check failed Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.498528 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gdn9v" podUID="fe5c29a5-1e27-4a9c-8050-ee9c10d2d595" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.664906 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.769616 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecc01578-0e24-429b-9a33-908ecd2ed349-kubelet-dir\") pod \"ecc01578-0e24-429b-9a33-908ecd2ed349\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.770085 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecc01578-0e24-429b-9a33-908ecd2ed349-kube-api-access\") pod \"ecc01578-0e24-429b-9a33-908ecd2ed349\" (UID: \"ecc01578-0e24-429b-9a33-908ecd2ed349\") " Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.771079 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc01578-0e24-429b-9a33-908ecd2ed349-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ecc01578-0e24-429b-9a33-908ecd2ed349" (UID: "ecc01578-0e24-429b-9a33-908ecd2ed349"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.780749 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.781089 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecc01578-0e24-429b-9a33-908ecd2ed349","Type":"ContainerDied","Data":"feb563babecc1b46d037ea9ee30a8aa0af8d5f2de2c18445130bb948c9bc9f8e"} Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.781121 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb563babecc1b46d037ea9ee30a8aa0af8d5f2de2c18445130bb948c9bc9f8e" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.794154 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc01578-0e24-429b-9a33-908ecd2ed349-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ecc01578-0e24-429b-9a33-908ecd2ed349" (UID: "ecc01578-0e24-429b-9a33-908ecd2ed349"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.871920 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecc01578-0e24-429b-9a33-908ecd2ed349-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:23 crc kubenswrapper[4872]: I0127 06:56:23.871969 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecc01578-0e24-429b-9a33-908ecd2ed349-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.173817 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.278415 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ef5e011-1671-4c35-968b-e74d5f43deaf-kubelet-dir\") pod \"0ef5e011-1671-4c35-968b-e74d5f43deaf\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.278568 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access\") pod \"0ef5e011-1671-4c35-968b-e74d5f43deaf\" (UID: \"0ef5e011-1671-4c35-968b-e74d5f43deaf\") " Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.279176 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef5e011-1671-4c35-968b-e74d5f43deaf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0ef5e011-1671-4c35-968b-e74d5f43deaf" (UID: "0ef5e011-1671-4c35-968b-e74d5f43deaf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.284970 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0ef5e011-1671-4c35-968b-e74d5f43deaf" (UID: "0ef5e011-1671-4c35-968b-e74d5f43deaf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.380422 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ef5e011-1671-4c35-968b-e74d5f43deaf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.380471 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ef5e011-1671-4c35-968b-e74d5f43deaf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.496591 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.500020 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gdn9v" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.620776 4872 patch_prober.go:28] interesting pod/console-f9d7485db-ntnst container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.5:8443/health\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.620852 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ntnst" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.5:8443/health\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.823766 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.824106 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0ef5e011-1671-4c35-968b-e74d5f43deaf","Type":"ContainerDied","Data":"6f738a1efa6de6ee6bf892a7d709e36d0666e83f32b3715a15c35458fc08f60a"} Jan 27 06:56:24 crc kubenswrapper[4872]: I0127 06:56:24.824171 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f738a1efa6de6ee6bf892a7d709e36d0666e83f32b3715a15c35458fc08f60a" Jan 27 06:56:25 crc kubenswrapper[4872]: I0127 06:56:25.001516 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:56:25 crc kubenswrapper[4872]: I0127 06:56:25.002789 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:56:25 crc kubenswrapper[4872]: I0127 06:56:25.429348 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:25 crc kubenswrapper[4872]: I0127 06:56:25.429407 4872 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:56:25 crc kubenswrapper[4872]: I0127 06:56:25.429354 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-6plnf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 27 06:56:25 crc kubenswrapper[4872]: I0127 06:56:25.429509 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6plnf" podUID="fecd0f15-a29c-4508-af28-9169b2cf96b7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 27 06:56:26 crc kubenswrapper[4872]: I0127 06:56:26.732193 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:56:26 crc kubenswrapper[4872]: I0127 06:56:26.767357 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f22e033f-46c7-4d57-a333-e1eee5cd3091-metrics-certs\") pod \"network-metrics-daemon-nstjz\" (UID: \"f22e033f-46c7-4d57-a333-e1eee5cd3091\") " pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:56:26 crc kubenswrapper[4872]: I0127 06:56:26.836129 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nstjz" Jan 27 06:56:27 crc kubenswrapper[4872]: I0127 06:56:27.423401 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nstjz"] Jan 27 06:56:27 crc kubenswrapper[4872]: I0127 06:56:27.896011 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nstjz" event={"ID":"f22e033f-46c7-4d57-a333-e1eee5cd3091","Type":"ContainerStarted","Data":"83ed52968ebe04696203903fbb61a2577b831c51f3986e65054539af4ef1edbc"} Jan 27 06:56:30 crc kubenswrapper[4872]: I0127 06:56:30.005300 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nstjz" event={"ID":"f22e033f-46c7-4d57-a333-e1eee5cd3091","Type":"ContainerStarted","Data":"f827b872dd0f9df6b7aa52cd4d3e3ca48e538ce0f9c6e8d530d01e1f2bd3d264"} Jan 27 06:56:31 crc kubenswrapper[4872]: I0127 06:56:31.036309 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nstjz" event={"ID":"f22e033f-46c7-4d57-a333-e1eee5cd3091","Type":"ContainerStarted","Data":"6f35a7cc5b8fecf809fc7e93f1877f8e9eec5e174813ac953ee4c36527650dad"} Jan 27 06:56:31 crc kubenswrapper[4872]: I0127 06:56:31.056810 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nstjz" podStartSLOduration=147.05679129 podStartE2EDuration="2m27.05679129s" podCreationTimestamp="2026-01-27 06:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:31.052498869 +0000 UTC m=+167.579974065" watchObservedRunningTime="2026-01-27 06:56:31.05679129 +0000 UTC m=+167.584266486" Jan 27 06:56:31 crc kubenswrapper[4872]: I0127 06:56:31.465349 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mmpq8"] Jan 27 06:56:31 crc kubenswrapper[4872]: I0127 06:56:31.465695 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" containerID="cri-o://f386be220afaddddf5ea643f6e93ef1675a15111df565c4dd44d9c89a5525e23" gracePeriod=30 Jan 27 06:56:31 crc kubenswrapper[4872]: I0127 06:56:31.477198 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs"] Jan 27 06:56:31 crc kubenswrapper[4872]: I0127 06:56:31.477718 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerName="route-controller-manager" containerID="cri-o://42c8ce3ead921d5e7d0ee7485300d0ffd22dc566bb578f0b2f0d37d6eaec36a3" gracePeriod=30 Jan 27 06:56:32 crc kubenswrapper[4872]: I0127 06:56:32.054193 4872 generic.go:334] "Generic (PLEG): container finished" podID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerID="42c8ce3ead921d5e7d0ee7485300d0ffd22dc566bb578f0b2f0d37d6eaec36a3" exitCode=0 Jan 27 06:56:32 crc kubenswrapper[4872]: I0127 06:56:32.054259 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" 
event={"ID":"2ec6eaa6-0516-4dad-8481-ef8bf49935de","Type":"ContainerDied","Data":"42c8ce3ead921d5e7d0ee7485300d0ffd22dc566bb578f0b2f0d37d6eaec36a3"} Jan 27 06:56:32 crc kubenswrapper[4872]: I0127 06:56:32.058745 4872 generic.go:334] "Generic (PLEG): container finished" podID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerID="f386be220afaddddf5ea643f6e93ef1675a15111df565c4dd44d9c89a5525e23" exitCode=0 Jan 27 06:56:32 crc kubenswrapper[4872]: I0127 06:56:32.058857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" event={"ID":"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4","Type":"ContainerDied","Data":"f386be220afaddddf5ea643f6e93ef1675a15111df565c4dd44d9c89a5525e23"} Jan 27 06:56:34 crc kubenswrapper[4872]: I0127 06:56:34.624481 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:34 crc kubenswrapper[4872]: I0127 06:56:34.629455 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 06:56:35 crc kubenswrapper[4872]: I0127 06:56:35.369642 4872 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vs2xs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 06:56:35 crc kubenswrapper[4872]: I0127 06:56:35.369698 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 06:56:35 crc kubenswrapper[4872]: I0127 06:56:35.431304 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6plnf" Jan 27 06:56:36 crc kubenswrapper[4872]: I0127 06:56:36.019335 4872 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mmpq8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 06:56:36 crc kubenswrapper[4872]: I0127 06:56:36.019421 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 06:56:37 crc kubenswrapper[4872]: I0127 06:56:37.182250 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.861145 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.866325 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.893909 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568c95bf5d-bc8ng"] Jan 27 06:56:44 crc kubenswrapper[4872]: E0127 06:56:44.894141 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894154 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" Jan 27 06:56:44 crc kubenswrapper[4872]: E0127 06:56:44.894164 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc01578-0e24-429b-9a33-908ecd2ed349" containerName="pruner" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894170 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc01578-0e24-429b-9a33-908ecd2ed349" containerName="pruner" Jan 27 06:56:44 crc kubenswrapper[4872]: E0127 06:56:44.894179 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374ad670-12b0-4842-a6ca-1cbf355b5a99" containerName="collect-profiles" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894185 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="374ad670-12b0-4842-a6ca-1cbf355b5a99" containerName="collect-profiles" Jan 27 06:56:44 crc kubenswrapper[4872]: E0127 06:56:44.894201 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef5e011-1671-4c35-968b-e74d5f43deaf" containerName="pruner" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894208 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef5e011-1671-4c35-968b-e74d5f43deaf" containerName="pruner" Jan 27 06:56:44 crc kubenswrapper[4872]: E0127 06:56:44.894215 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerName="route-controller-manager" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894222 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerName="route-controller-manager" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894330 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="374ad670-12b0-4842-a6ca-1cbf355b5a99" containerName="collect-profiles" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894342 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc01578-0e24-429b-9a33-908ecd2ed349" containerName="pruner" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894352 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" containerName="route-controller-manager" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894360 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" containerName="controller-manager" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894368 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef5e011-1671-4c35-968b-e74d5f43deaf" containerName="pruner" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.894714 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:44 crc kubenswrapper[4872]: I0127 06:56:44.909636 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568c95bf5d-bc8ng"] Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.009120 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-proxy-ca-bundles\") pod \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010089 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec6eaa6-0516-4dad-8481-ef8bf49935de-serving-cert\") pod \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010133 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" (UID: "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010141 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2h8t\" (UniqueName: \"kubernetes.io/projected/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-kube-api-access-h2h8t\") pod \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010200 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmjqp\" (UniqueName: \"kubernetes.io/projected/2ec6eaa6-0516-4dad-8481-ef8bf49935de-kube-api-access-rmjqp\") pod \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010228 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-serving-cert\") pod \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010265 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-config\") pod \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010294 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-config\") pod \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010331 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-client-ca\") pod \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\" (UID: \"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4\") " Jan 27 
06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010367 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-client-ca\") pod \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\" (UID: \"2ec6eaa6-0516-4dad-8481-ef8bf49935de\") " Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010528 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-proxy-ca-bundles\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010563 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-client-ca\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010597 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-config\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010628 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39408865-e291-4366-891c-db94549bad1f-serving-cert\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010644 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dk2\" (UniqueName: \"kubernetes.io/projected/39408865-e291-4366-891c-db94549bad1f-kube-api-access-p5dk2\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.010686 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.011259 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-client-ca" (OuterVolumeSpecName: "client-ca") pod "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" (UID: "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.011317 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ec6eaa6-0516-4dad-8481-ef8bf49935de" (UID: "2ec6eaa6-0516-4dad-8481-ef8bf49935de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.011429 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-config" (OuterVolumeSpecName: "config") pod "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" (UID: "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.011793 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-config" (OuterVolumeSpecName: "config") pod "2ec6eaa6-0516-4dad-8481-ef8bf49935de" (UID: "2ec6eaa6-0516-4dad-8481-ef8bf49935de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.015788 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-kube-api-access-h2h8t" (OuterVolumeSpecName: "kube-api-access-h2h8t") pod "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" (UID: "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4"). InnerVolumeSpecName "kube-api-access-h2h8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.017224 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec6eaa6-0516-4dad-8481-ef8bf49935de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ec6eaa6-0516-4dad-8481-ef8bf49935de" (UID: "2ec6eaa6-0516-4dad-8481-ef8bf49935de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.017765 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" (UID: "f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.026154 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec6eaa6-0516-4dad-8481-ef8bf49935de-kube-api-access-rmjqp" (OuterVolumeSpecName: "kube-api-access-rmjqp") pod "2ec6eaa6-0516-4dad-8481-ef8bf49935de" (UID: "2ec6eaa6-0516-4dad-8481-ef8bf49935de"). InnerVolumeSpecName "kube-api-access-rmjqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.111467 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-config\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.111997 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39408865-e291-4366-891c-db94549bad1f-serving-cert\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112024 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5dk2\" (UniqueName: \"kubernetes.io/projected/39408865-e291-4366-891c-db94549bad1f-kube-api-access-p5dk2\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112099 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-proxy-ca-bundles\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112134 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-client-ca\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112191 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec6eaa6-0516-4dad-8481-ef8bf49935de-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112224 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2h8t\" (UniqueName: \"kubernetes.io/projected/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-kube-api-access-h2h8t\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112239 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmjqp\" (UniqueName: \"kubernetes.io/projected/2ec6eaa6-0516-4dad-8481-ef8bf49935de-kube-api-access-rmjqp\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112253 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112265 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112276 
4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112286 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.112296 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec6eaa6-0516-4dad-8481-ef8bf49935de-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.113062 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-config\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.113157 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-client-ca\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.114705 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-proxy-ca-bundles\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.125727 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39408865-e291-4366-891c-db94549bad1f-serving-cert\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.137565 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5dk2\" (UniqueName: \"kubernetes.io/projected/39408865-e291-4366-891c-db94549bad1f-kube-api-access-p5dk2\") pod \"controller-manager-568c95bf5d-bc8ng\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.151987 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" event={"ID":"2ec6eaa6-0516-4dad-8481-ef8bf49935de","Type":"ContainerDied","Data":"d567fc4c3c638fc845f7593a7862b4438aaee5cfe5c159472081a788f87b1c43"} Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.152056 4872 scope.go:117] "RemoveContainer" containerID="42c8ce3ead921d5e7d0ee7485300d0ffd22dc566bb578f0b2f0d37d6eaec36a3" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.152206 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.156786 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" event={"ID":"f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4","Type":"ContainerDied","Data":"e4ee39c0b20e5721be6880348bf130aed828795bc0029be48cca1d44f2361c6c"} Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.156888 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mmpq8" Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.194792 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs"] Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.196658 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vs2xs"] Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.204310 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mmpq8"] Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.207073 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mmpq8"] Jan 27 06:56:45 crc kubenswrapper[4872]: I0127 06:56:45.220465 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:46 crc kubenswrapper[4872]: I0127 06:56:46.104599 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec6eaa6-0516-4dad-8481-ef8bf49935de" path="/var/lib/kubelet/pods/2ec6eaa6-0516-4dad-8481-ef8bf49935de/volumes" Jan 27 06:56:46 crc kubenswrapper[4872]: I0127 06:56:46.105606 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4" path="/var/lib/kubelet/pods/f33eaaf8-cc5b-46a2-b0f9-dc7ac7023ab4/volumes" Jan 27 06:56:46 crc kubenswrapper[4872]: I0127 06:56:46.223765 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpbg9" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.436011 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt"] Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.437234 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.439856 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.440523 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.440728 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.441060 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.441394 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.441790 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.442697 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmll\" (UniqueName: \"kubernetes.io/projected/24fb1248-cdba-4e18-aa8a-09af2a16b09b-kube-api-access-gvmll\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.442810 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-client-ca\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.442902 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fb1248-cdba-4e18-aa8a-09af2a16b09b-serving-cert\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.442969 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-config\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.455105 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt"] Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.544485 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvmll\" (UniqueName: \"kubernetes.io/projected/24fb1248-cdba-4e18-aa8a-09af2a16b09b-kube-api-access-gvmll\") pod 
\"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.544610 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-client-ca\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.544681 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fb1248-cdba-4e18-aa8a-09af2a16b09b-serving-cert\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.544716 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-config\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.545626 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-client-ca\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.547886 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-config\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.552572 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fb1248-cdba-4e18-aa8a-09af2a16b09b-serving-cert\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.560913 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvmll\" (UniqueName: \"kubernetes.io/projected/24fb1248-cdba-4e18-aa8a-09af2a16b09b-kube-api-access-gvmll\") pod \"route-controller-manager-7f74f88567-9kqzt\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:47 crc kubenswrapper[4872]: I0127 06:56:47.820858 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:51 crc kubenswrapper[4872]: I0127 06:56:51.358280 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568c95bf5d-bc8ng"] Jan 27 06:56:51 crc kubenswrapper[4872]: I0127 06:56:51.446208 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt"] Jan 27 06:56:52 crc kubenswrapper[4872]: I0127 06:56:52.431704 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:56:52 crc kubenswrapper[4872]: E0127 06:56:52.773298 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 06:56:52 crc kubenswrapper[4872]: E0127 06:56:52.773497 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmpt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8r2pn_openshift-marketplace(d6feabd3-81a6-4ef5-b7a7-5df917fefcc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:56:52 crc kubenswrapper[4872]: E0127 06:56:52.774654 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8r2pn" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" Jan 27 06:56:52 crc kubenswrapper[4872]: E0127 06:56:52.824105 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 06:56:52 crc kubenswrapper[4872]: E0127 06:56:52.824319 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8fpw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mm92d_openshift-marketplace(7e6a557b-5fc3-46e7-9799-1b75567c2dc6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:56:52 crc kubenswrapper[4872]: E0127 06:56:52.825496 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mm92d" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.149863 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8r2pn" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.149875 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mm92d" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.218081 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.218219 4872 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcc5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pp8h2_openshift-marketplace(59197c9d-c3b2-4ca8-875b-f1264d3b8d7b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.219424 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pp8h2" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" Jan 27 06:56:54 crc kubenswrapper[4872]: I0127 06:56:54.279350 4872 scope.go:117] "RemoveContainer" containerID="f386be220afaddddf5ea643f6e93ef1675a15111df565c4dd44d9c89a5525e23" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.333328 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.333481 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kgp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cvk24_openshift-marketplace(cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:56:54 crc kubenswrapper[4872]: E0127 06:56:54.334654 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cvk24" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" Jan 27 06:56:54 crc kubenswrapper[4872]: I0127 06:56:54.702004 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt"] Jan 27 06:56:54 crc kubenswrapper[4872]: W0127 06:56:54.831205 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39408865_e291_4366_891c_db94549bad1f.slice/crio-2abd841e4911cfe944e9da8c2d4a7a0fcb094827c075e468039a6e73f1d8ab51 WatchSource:0}: Error finding container 2abd841e4911cfe944e9da8c2d4a7a0fcb094827c075e468039a6e73f1d8ab51: Status 404 returned error can't find the container with id 2abd841e4911cfe944e9da8c2d4a7a0fcb094827c075e468039a6e73f1d8ab51 Jan 27 06:56:54 crc kubenswrapper[4872]: I0127 06:56:54.831348 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568c95bf5d-bc8ng"] Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.001111 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.001173 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.216239 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerStarted","Data":"768ee19f12b3d4763ecf9a8f805ddd30c386c5b3c453872265c55bac6b40150f"} Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.219093 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerStarted","Data":"e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a"} Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.220891 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" podUID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" containerName="route-controller-manager" containerID="cri-o://f5f85a53a0636f627b7ab44d16ac117e0a9fc0f4f53c6e899a046f9cfd9a8543" gracePeriod=30 Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.221184 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" event={"ID":"24fb1248-cdba-4e18-aa8a-09af2a16b09b","Type":"ContainerStarted","Data":"f5f85a53a0636f627b7ab44d16ac117e0a9fc0f4f53c6e899a046f9cfd9a8543"} Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.221218 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" event={"ID":"24fb1248-cdba-4e18-aa8a-09af2a16b09b","Type":"ContainerStarted","Data":"5a0ada4c3485b63471191f5cc7a6c6c7548a8321cc25b47499618eb074baf558"} Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.221598 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.226014 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" event={"ID":"39408865-e291-4366-891c-db94549bad1f","Type":"ContainerStarted","Data":"2abd841e4911cfe944e9da8c2d4a7a0fcb094827c075e468039a6e73f1d8ab51"} Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.228568 4872 generic.go:334] "Generic (PLEG): container finished" podID="22382085-00d5-42cd-97fb-098b131498d6" containerID="b6153cf7751dea6874b3b5934378533565c3adef32a86d2c33955cfca9c91552" exitCode=0 Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.228630 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k9dq" event={"ID":"22382085-00d5-42cd-97fb-098b131498d6","Type":"ContainerDied","Data":"b6153cf7751dea6874b3b5934378533565c3adef32a86d2c33955cfca9c91552"} Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.231608 4872 generic.go:334] "Generic (PLEG): container finished" podID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerID="5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40" exitCode=0 Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.231690 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-854hx" 
event={"ID":"e29c05a4-c51b-4ef3-b341-fca8f5f68b53","Type":"ContainerDied","Data":"5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40"} Jan 27 06:56:55 crc kubenswrapper[4872]: E0127 06:56:55.241774 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cvk24" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" Jan 27 06:56:55 crc kubenswrapper[4872]: E0127 06:56:55.242446 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pp8h2" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.268770 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" podStartSLOduration=24.268742688 podStartE2EDuration="24.268742688s" podCreationTimestamp="2026-01-27 06:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:55.268278115 +0000 UTC m=+191.795753311" watchObservedRunningTime="2026-01-27 06:56:55.268742688 +0000 UTC m=+191.796217884" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.286930 4872 patch_prober.go:28] interesting pod/route-controller-manager-7f74f88567-9kqzt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:53878->10.217.0.55:8443: read: connection reset by peer" start-of-body= Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.287026 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" podUID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:53878->10.217.0.55:8443: read: connection reset by peer" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.377516 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.378484 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.381576 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.381698 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.394314 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.447260 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39a742e2-e091-4c88-8e06-df37b6ffa480-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.447545 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39a742e2-e091-4c88-8e06-df37b6ffa480-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.548605 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39a742e2-e091-4c88-8e06-df37b6ffa480-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.548664 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39a742e2-e091-4c88-8e06-df37b6ffa480-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.549117 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39a742e2-e091-4c88-8e06-df37b6ffa480-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.569425 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39a742e2-e091-4c88-8e06-df37b6ffa480-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:55 crc kubenswrapper[4872]: I0127 06:56:55.700103 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.199815 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:56:56 crc kubenswrapper[4872]: W0127 06:56:56.210691 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod39a742e2_e091_4c88_8e06_df37b6ffa480.slice/crio-0653ddf96201313681674f9ed50d1685f5cba046d694fa5c0537a0046db64137 WatchSource:0}: Error finding container 0653ddf96201313681674f9ed50d1685f5cba046d694fa5c0537a0046db64137: Status 404 returned error can't find the container with id 0653ddf96201313681674f9ed50d1685f5cba046d694fa5c0537a0046db64137 Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.242973 4872 generic.go:334] "Generic (PLEG): container finished" podID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerID="e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a" exitCode=0 Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.243063 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerDied","Data":"e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a"} Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.245283 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39a742e2-e091-4c88-8e06-df37b6ffa480","Type":"ContainerStarted","Data":"0653ddf96201313681674f9ed50d1685f5cba046d694fa5c0537a0046db64137"} Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.247084 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f74f88567-9kqzt_24fb1248-cdba-4e18-aa8a-09af2a16b09b/route-controller-manager/0.log" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.247135 4872 generic.go:334] "Generic (PLEG): container finished" podID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" containerID="f5f85a53a0636f627b7ab44d16ac117e0a9fc0f4f53c6e899a046f9cfd9a8543" exitCode=255 Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.247195 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" event={"ID":"24fb1248-cdba-4e18-aa8a-09af2a16b09b","Type":"ContainerDied","Data":"f5f85a53a0636f627b7ab44d16ac117e0a9fc0f4f53c6e899a046f9cfd9a8543"} Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.252404 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" event={"ID":"39408865-e291-4366-891c-db94549bad1f","Type":"ContainerStarted","Data":"8791fe568db7398adaa498b2983c0dd11d521bac8ef334d983953edc11eae605"} Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.252544 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" podUID="39408865-e291-4366-891c-db94549bad1f" containerName="controller-manager" containerID="cri-o://8791fe568db7398adaa498b2983c0dd11d521bac8ef334d983953edc11eae605" gracePeriod=30 Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.253063 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.255938 4872 generic.go:334] 
"Generic (PLEG): container finished" podID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerID="768ee19f12b3d4763ecf9a8f805ddd30c386c5b3c453872265c55bac6b40150f" exitCode=0 Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.255972 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerDied","Data":"768ee19f12b3d4763ecf9a8f805ddd30c386c5b3c453872265c55bac6b40150f"} Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.261781 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.295407 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" podStartSLOduration=25.295381827 podStartE2EDuration="25.295381827s" podCreationTimestamp="2026-01-27 06:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:56.291394205 +0000 UTC m=+192.818869401" watchObservedRunningTime="2026-01-27 06:56:56.295381827 +0000 UTC m=+192.822857023" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.585925 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f74f88567-9kqzt_24fb1248-cdba-4e18-aa8a-09af2a16b09b/route-controller-manager/0.log" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.586446 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.614327 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw"] Jan 27 06:56:56 crc kubenswrapper[4872]: E0127 06:56:56.614530 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" containerName="route-controller-manager" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.614543 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" containerName="route-controller-manager" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.614672 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" containerName="route-controller-manager" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.615028 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.631541 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw"] Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.673288 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-client-ca\") pod \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.673632 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-config\") pod \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.673739 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fb1248-cdba-4e18-aa8a-09af2a16b09b-serving-cert\") pod \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.673886 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvmll\" (UniqueName: \"kubernetes.io/projected/24fb1248-cdba-4e18-aa8a-09af2a16b09b-kube-api-access-gvmll\") pod \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\" (UID: \"24fb1248-cdba-4e18-aa8a-09af2a16b09b\") " Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.674146 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-config\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.674287 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6g7q\" (UniqueName: \"kubernetes.io/projected/1c6e9aac-6703-44ed-af38-bc1aaf478223-kube-api-access-v6g7q\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.674390 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6e9aac-6703-44ed-af38-bc1aaf478223-serving-cert\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.674513 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-client-ca\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: 
I0127 06:56:56.674428 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-client-ca" (OuterVolumeSpecName: "client-ca") pod "24fb1248-cdba-4e18-aa8a-09af2a16b09b" (UID: "24fb1248-cdba-4e18-aa8a-09af2a16b09b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.675234 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-config" (OuterVolumeSpecName: "config") pod "24fb1248-cdba-4e18-aa8a-09af2a16b09b" (UID: "24fb1248-cdba-4e18-aa8a-09af2a16b09b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.679718 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fb1248-cdba-4e18-aa8a-09af2a16b09b-kube-api-access-gvmll" (OuterVolumeSpecName: "kube-api-access-gvmll") pod "24fb1248-cdba-4e18-aa8a-09af2a16b09b" (UID: "24fb1248-cdba-4e18-aa8a-09af2a16b09b"). InnerVolumeSpecName "kube-api-access-gvmll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.679900 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fb1248-cdba-4e18-aa8a-09af2a16b09b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24fb1248-cdba-4e18-aa8a-09af2a16b09b" (UID: "24fb1248-cdba-4e18-aa8a-09af2a16b09b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775445 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6e9aac-6703-44ed-af38-bc1aaf478223-serving-cert\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775489 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-client-ca\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775532 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-config\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775569 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6g7q\" (UniqueName: \"kubernetes.io/projected/1c6e9aac-6703-44ed-af38-bc1aaf478223-kube-api-access-v6g7q\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775608 4872 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775620 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fb1248-cdba-4e18-aa8a-09af2a16b09b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775629 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvmll\" (UniqueName: \"kubernetes.io/projected/24fb1248-cdba-4e18-aa8a-09af2a16b09b-kube-api-access-gvmll\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.775639 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fb1248-cdba-4e18-aa8a-09af2a16b09b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.776929 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-client-ca\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.777734 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-config\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.779135 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6e9aac-6703-44ed-af38-bc1aaf478223-serving-cert\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.792061 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6g7q\" (UniqueName: \"kubernetes.io/projected/1c6e9aac-6703-44ed-af38-bc1aaf478223-kube-api-access-v6g7q\") pod \"route-controller-manager-cdf57d98f-nqgbw\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:56 crc kubenswrapper[4872]: I0127 06:56:56.947748 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.263633 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39a742e2-e091-4c88-8e06-df37b6ffa480","Type":"ContainerStarted","Data":"ce6d1026a6485ab27a75e156ab9159d0860b56f000adb8e5bbf2cc9679b36bbe"} Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.265371 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f74f88567-9kqzt_24fb1248-cdba-4e18-aa8a-09af2a16b09b/route-controller-manager/0.log" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.265521 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.265539 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt" event={"ID":"24fb1248-cdba-4e18-aa8a-09af2a16b09b","Type":"ContainerDied","Data":"5a0ada4c3485b63471191f5cc7a6c6c7548a8321cc25b47499618eb074baf558"} Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.265608 4872 scope.go:117] "RemoveContainer" containerID="f5f85a53a0636f627b7ab44d16ac117e0a9fc0f4f53c6e899a046f9cfd9a8543" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.270076 4872 generic.go:334] "Generic (PLEG): container finished" podID="39408865-e291-4366-891c-db94549bad1f" containerID="8791fe568db7398adaa498b2983c0dd11d521bac8ef334d983953edc11eae605" exitCode=0 Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.270138 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" event={"ID":"39408865-e291-4366-891c-db94549bad1f","Type":"ContainerDied","Data":"8791fe568db7398adaa498b2983c0dd11d521bac8ef334d983953edc11eae605"} Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.297798 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt"] Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.303893 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f74f88567-9kqzt"] Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.370145 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw"] Jan 27 06:56:57 crc kubenswrapper[4872]: W0127 06:56:57.395950 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c6e9aac_6703_44ed_af38_bc1aaf478223.slice/crio-87f7eb71604b7a6be548196777f01c6fa6dbda2bfc57f84469e344ebd7db1bc8 WatchSource:0}: Error finding container 87f7eb71604b7a6be548196777f01c6fa6dbda2bfc57f84469e344ebd7db1bc8: Status 404 returned error can't find the container with id 87f7eb71604b7a6be548196777f01c6fa6dbda2bfc57f84469e344ebd7db1bc8 Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.554405 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.590023 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5dk2\" (UniqueName: \"kubernetes.io/projected/39408865-e291-4366-891c-db94549bad1f-kube-api-access-p5dk2\") pod \"39408865-e291-4366-891c-db94549bad1f\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.590088 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39408865-e291-4366-891c-db94549bad1f-serving-cert\") pod \"39408865-e291-4366-891c-db94549bad1f\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.590127 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-config\") pod \"39408865-e291-4366-891c-db94549bad1f\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.590163 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-proxy-ca-bundles\") pod \"39408865-e291-4366-891c-db94549bad1f\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.590206 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-client-ca\") pod \"39408865-e291-4366-891c-db94549bad1f\" (UID: \"39408865-e291-4366-891c-db94549bad1f\") " Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.592513 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "39408865-e291-4366-891c-db94549bad1f" (UID: "39408865-e291-4366-891c-db94549bad1f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.592579 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-config" (OuterVolumeSpecName: "config") pod "39408865-e291-4366-891c-db94549bad1f" (UID: "39408865-e291-4366-891c-db94549bad1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.593611 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-client-ca" (OuterVolumeSpecName: "client-ca") pod "39408865-e291-4366-891c-db94549bad1f" (UID: "39408865-e291-4366-891c-db94549bad1f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.596385 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39408865-e291-4366-891c-db94549bad1f-kube-api-access-p5dk2" (OuterVolumeSpecName: "kube-api-access-p5dk2") pod "39408865-e291-4366-891c-db94549bad1f" (UID: "39408865-e291-4366-891c-db94549bad1f"). InnerVolumeSpecName "kube-api-access-p5dk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.597866 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39408865-e291-4366-891c-db94549bad1f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39408865-e291-4366-891c-db94549bad1f" (UID: "39408865-e291-4366-891c-db94549bad1f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.691920 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39408865-e291-4366-891c-db94549bad1f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.691955 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.691963 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.691974 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39408865-e291-4366-891c-db94549bad1f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:57 crc kubenswrapper[4872]: I0127 06:56:57.691983 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5dk2\" (UniqueName: \"kubernetes.io/projected/39408865-e291-4366-891c-db94549bad1f-kube-api-access-p5dk2\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.105053 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fb1248-cdba-4e18-aa8a-09af2a16b09b" path="/var/lib/kubelet/pods/24fb1248-cdba-4e18-aa8a-09af2a16b09b/volumes" Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.279475 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.280142 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568c95bf5d-bc8ng" event={"ID":"39408865-e291-4366-891c-db94549bad1f","Type":"ContainerDied","Data":"2abd841e4911cfe944e9da8c2d4a7a0fcb094827c075e468039a6e73f1d8ab51"} Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.280186 4872 scope.go:117] "RemoveContainer" containerID="8791fe568db7398adaa498b2983c0dd11d521bac8ef334d983953edc11eae605" Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.297873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" event={"ID":"1c6e9aac-6703-44ed-af38-bc1aaf478223","Type":"ContainerStarted","Data":"bb111da8bc23b83ff7a68f0c891e47ff454879501c423c7e0cf5934851808faa"} Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.298612 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.298640 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" event={"ID":"1c6e9aac-6703-44ed-af38-bc1aaf478223","Type":"ContainerStarted","Data":"87f7eb71604b7a6be548196777f01c6fa6dbda2bfc57f84469e344ebd7db1bc8"} Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.301257 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568c95bf5d-bc8ng"] Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.304740 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-568c95bf5d-bc8ng"] Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.306460 4872 generic.go:334] "Generic (PLEG): container finished" podID="39a742e2-e091-4c88-8e06-df37b6ffa480" containerID="ce6d1026a6485ab27a75e156ab9159d0860b56f000adb8e5bbf2cc9679b36bbe" exitCode=0 Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.306516 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39a742e2-e091-4c88-8e06-df37b6ffa480","Type":"ContainerDied","Data":"ce6d1026a6485ab27a75e156ab9159d0860b56f000adb8e5bbf2cc9679b36bbe"} Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.339748 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:56:58 crc kubenswrapper[4872]: I0127 06:56:58.346326 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" podStartSLOduration=7.346308642 podStartE2EDuration="7.346308642s" podCreationTimestamp="2026-01-27 06:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:56:58.318407465 +0000 UTC m=+194.845882661" watchObservedRunningTime="2026-01-27 06:56:58.346308642 +0000 UTC m=+194.873783838" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.312586 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k9dq" 
event={"ID":"22382085-00d5-42cd-97fb-098b131498d6","Type":"ContainerStarted","Data":"36bb9be558e24f9ebbee1fc1799760e10926d52aa68f6d504423e4dfd5a3a628"} Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.332418 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9k9dq" podStartSLOduration=4.053105351 podStartE2EDuration="45.332401278s" podCreationTimestamp="2026-01-27 06:56:14 +0000 UTC" firstStartedPulling="2026-01-27 06:56:17.18598998 +0000 UTC m=+153.713465176" lastFinishedPulling="2026-01-27 06:56:58.465285917 +0000 UTC m=+194.992761103" observedRunningTime="2026-01-27 06:56:59.329481806 +0000 UTC m=+195.856957002" watchObservedRunningTime="2026-01-27 06:56:59.332401278 +0000 UTC m=+195.859876474" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.446441 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78956d596-5r2rl"] Jan 27 06:56:59 crc kubenswrapper[4872]: E0127 06:56:59.446654 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39408865-e291-4366-891c-db94549bad1f" containerName="controller-manager" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.446665 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="39408865-e291-4366-891c-db94549bad1f" containerName="controller-manager" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.446765 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="39408865-e291-4366-891c-db94549bad1f" containerName="controller-manager" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.447164 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.449084 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.460269 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.464035 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.464779 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.464923 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.465551 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.465680 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.469199 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78956d596-5r2rl"] Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.627207 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637281 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39a742e2-e091-4c88-8e06-df37b6ffa480-kube-api-access\") pod \"39a742e2-e091-4c88-8e06-df37b6ffa480\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637382 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39a742e2-e091-4c88-8e06-df37b6ffa480-kubelet-dir\") pod \"39a742e2-e091-4c88-8e06-df37b6ffa480\" (UID: \"39a742e2-e091-4c88-8e06-df37b6ffa480\") " Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637425 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39a742e2-e091-4c88-8e06-df37b6ffa480-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "39a742e2-e091-4c88-8e06-df37b6ffa480" (UID: "39a742e2-e091-4c88-8e06-df37b6ffa480"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637660 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-proxy-ca-bundles\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637719 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb0c8c-bff2-4aed-9038-733f882dc2f7-serving-cert\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637747 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-client-ca\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637799 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-config\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637943 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnjf\" (UniqueName: \"kubernetes.io/projected/51cb0c8c-bff2-4aed-9038-733f882dc2f7-kube-api-access-ktnjf\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.637992 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/39a742e2-e091-4c88-8e06-df37b6ffa480-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.646041 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a742e2-e091-4c88-8e06-df37b6ffa480-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "39a742e2-e091-4c88-8e06-df37b6ffa480" (UID: "39a742e2-e091-4c88-8e06-df37b6ffa480"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.738566 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-config\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.738621 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktnjf\" (UniqueName: \"kubernetes.io/projected/51cb0c8c-bff2-4aed-9038-733f882dc2f7-kube-api-access-ktnjf\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.738673 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-proxy-ca-bundles\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.738692 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb0c8c-bff2-4aed-9038-733f882dc2f7-serving-cert\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.738712 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-client-ca\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.739060 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39a742e2-e091-4c88-8e06-df37b6ffa480-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.739796 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-client-ca\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.740487 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-proxy-ca-bundles\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.744785 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-config\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.746059 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb0c8c-bff2-4aed-9038-733f882dc2f7-serving-cert\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.754553 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktnjf\" (UniqueName: \"kubernetes.io/projected/51cb0c8c-bff2-4aed-9038-733f882dc2f7-kube-api-access-ktnjf\") pod \"controller-manager-78956d596-5r2rl\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:56:59 crc kubenswrapper[4872]: I0127 06:56:59.791445 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:57:00 crc kubenswrapper[4872]: I0127 06:57:00.105562 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39408865-e291-4366-891c-db94549bad1f" path="/var/lib/kubelet/pods/39408865-e291-4366-891c-db94549bad1f/volumes" Jan 27 06:57:00 crc kubenswrapper[4872]: I0127 06:57:00.320212 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39a742e2-e091-4c88-8e06-df37b6ffa480","Type":"ContainerDied","Data":"0653ddf96201313681674f9ed50d1685f5cba046d694fa5c0537a0046db64137"} Jan 27 06:57:00 crc kubenswrapper[4872]: I0127 06:57:00.320258 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0653ddf96201313681674f9ed50d1685f5cba046d694fa5c0537a0046db64137" Jan 27 06:57:00 crc kubenswrapper[4872]: I0127 06:57:00.320315 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:57:00 crc kubenswrapper[4872]: I0127 06:57:00.322565 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-854hx" event={"ID":"e29c05a4-c51b-4ef3-b341-fca8f5f68b53","Type":"ContainerStarted","Data":"b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576"} Jan 27 06:57:00 crc kubenswrapper[4872]: I0127 06:57:00.344343 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-854hx" podStartSLOduration=5.483155187 podStartE2EDuration="46.344324823s" podCreationTimestamp="2026-01-27 06:56:14 +0000 UTC" firstStartedPulling="2026-01-27 06:56:18.263512605 +0000 UTC m=+154.790987801" lastFinishedPulling="2026-01-27 06:56:59.124682231 +0000 UTC m=+195.652157437" observedRunningTime="2026-01-27 06:57:00.343874591 +0000 UTC m=+196.871349777" watchObservedRunningTime="2026-01-27 06:57:00.344324823 +0000 UTC m=+196.871800019" Jan 27 06:57:01 crc kubenswrapper[4872]: I0127 06:57:01.829556 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78956d596-5r2rl"] Jan 27 06:57:01 crc kubenswrapper[4872]: W0127 06:57:01.848933 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51cb0c8c_bff2_4aed_9038_733f882dc2f7.slice/crio-9e342993a92e871c906f2cf9583dd8bd94013e1e080a89d0f2d4ffa2d3aecdb6 WatchSource:0}: Error finding container 9e342993a92e871c906f2cf9583dd8bd94013e1e080a89d0f2d4ffa2d3aecdb6: Status 404 returned error can't find the container with id 9e342993a92e871c906f2cf9583dd8bd94013e1e080a89d0f2d4ffa2d3aecdb6 Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.333054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" event={"ID":"51cb0c8c-bff2-4aed-9038-733f882dc2f7","Type":"ContainerStarted","Data":"5a3396e58ccb076dbc83ce927ea16dc9cbfcfdd4979b0956fbcadde73c70891b"} Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.333408 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.333436 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" event={"ID":"51cb0c8c-bff2-4aed-9038-733f882dc2f7","Type":"ContainerStarted","Data":"9e342993a92e871c906f2cf9583dd8bd94013e1e080a89d0f2d4ffa2d3aecdb6"} Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.335666 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerStarted","Data":"2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb"} Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.337177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerStarted","Data":"7ce2b73c8ea5c00d8dde397f7f6242a6049fe22f7d4aec243b70e06141708030"} Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.341545 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 
06:57:02.356469 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" podStartSLOduration=11.356452683 podStartE2EDuration="11.356452683s" podCreationTimestamp="2026-01-27 06:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:02.354352605 +0000 UTC m=+198.881827801" watchObservedRunningTime="2026-01-27 06:57:02.356452683 +0000 UTC m=+198.883927879" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.405796 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hthrz" podStartSLOduration=4.947469612 podStartE2EDuration="48.405780032s" podCreationTimestamp="2026-01-27 06:56:14 +0000 UTC" firstStartedPulling="2026-01-27 06:56:18.276279305 +0000 UTC m=+154.803754501" lastFinishedPulling="2026-01-27 06:57:01.734589725 +0000 UTC m=+198.262064921" observedRunningTime="2026-01-27 06:57:02.403640433 +0000 UTC m=+198.931115639" watchObservedRunningTime="2026-01-27 06:57:02.405780032 +0000 UTC m=+198.933255218" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.777241 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tltfz" podStartSLOduration=5.450253789 podStartE2EDuration="45.777218211s" podCreationTimestamp="2026-01-27 06:56:17 +0000 UTC" firstStartedPulling="2026-01-27 06:56:20.524110331 +0000 UTC m=+157.051585527" lastFinishedPulling="2026-01-27 06:57:00.851074753 +0000 UTC m=+197.378549949" observedRunningTime="2026-01-27 06:57:02.436675549 +0000 UTC m=+198.964150745" watchObservedRunningTime="2026-01-27 06:57:02.777218211 +0000 UTC m=+199.304693397" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.778249 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:57:02 crc kubenswrapper[4872]: E0127 06:57:02.778468 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a742e2-e091-4c88-8e06-df37b6ffa480" containerName="pruner" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.778488 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a742e2-e091-4c88-8e06-df37b6ffa480" containerName="pruner" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.778635 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a742e2-e091-4c88-8e06-df37b6ffa480" containerName="pruner" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.779154 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.782648 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.784479 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.798727 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.875789 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-var-lock\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.876206 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.876253 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e764f4a-dc17-468f-9f40-da1cd46c4098-kube-api-access\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.977977 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e764f4a-dc17-468f-9f40-da1cd46c4098-kube-api-access\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.978053 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-var-lock\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.978084 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.978156 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.978156 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-var-lock\") pod \"installer-9-crc\" (UID: 
\"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:02 crc kubenswrapper[4872]: I0127 06:57:02.999502 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e764f4a-dc17-468f-9f40-da1cd46c4098-kube-api-access\") pod \"installer-9-crc\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:03 crc kubenswrapper[4872]: I0127 06:57:03.094504 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:03 crc kubenswrapper[4872]: I0127 06:57:03.530136 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:57:03 crc kubenswrapper[4872]: W0127 06:57:03.547077 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e764f4a_dc17_468f_9f40_da1cd46c4098.slice/crio-bc3d1e0b589a56059ec5b1e3a666cd9cde4bf0799e61205635dda58dda5825aa WatchSource:0}: Error finding container bc3d1e0b589a56059ec5b1e3a666cd9cde4bf0799e61205635dda58dda5825aa: Status 404 returned error can't find the container with id bc3d1e0b589a56059ec5b1e3a666cd9cde4bf0799e61205635dda58dda5825aa Jan 27 06:57:04 crc kubenswrapper[4872]: I0127 06:57:04.349283 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5e764f4a-dc17-468f-9f40-da1cd46c4098","Type":"ContainerStarted","Data":"2af3c5ed4af28705b77e48e6a06c2bb00619465facb4dd7b15328c2b0836bd2f"} Jan 27 06:57:04 crc kubenswrapper[4872]: I0127 06:57:04.349336 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5e764f4a-dc17-468f-9f40-da1cd46c4098","Type":"ContainerStarted","Data":"bc3d1e0b589a56059ec5b1e3a666cd9cde4bf0799e61205635dda58dda5825aa"} Jan 27 06:57:04 crc kubenswrapper[4872]: I0127 06:57:04.367320 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.367300429 podStartE2EDuration="2.367300429s" podCreationTimestamp="2026-01-27 06:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:04.362977049 +0000 UTC m=+200.890452245" watchObservedRunningTime="2026-01-27 06:57:04.367300429 +0000 UTC m=+200.894775625" Jan 27 06:57:04 crc kubenswrapper[4872]: I0127 06:57:04.776554 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:57:04 crc kubenswrapper[4872]: I0127 06:57:04.776879 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.003937 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.305126 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.306285 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.306378 4872 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.306670 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.349399 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.354732 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:57:05 crc kubenswrapper[4872]: I0127 06:57:05.428052 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:57:06 crc kubenswrapper[4872]: I0127 06:57:06.416319 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:57:07 crc kubenswrapper[4872]: I0127 06:57:07.448617 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-854hx"] Jan 27 06:57:08 crc kubenswrapper[4872]: I0127 06:57:08.369104 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-854hx" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="registry-server" containerID="cri-o://b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576" gracePeriod=2 Jan 27 06:57:08 crc kubenswrapper[4872]: I0127 06:57:08.382280 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:57:08 crc kubenswrapper[4872]: I0127 06:57:08.382401 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.166591 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.251066 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-utilities\") pod \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.251342 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmkxr\" (UniqueName: \"kubernetes.io/projected/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-kube-api-access-qmkxr\") pod \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.252593 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-catalog-content\") pod \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\" (UID: \"e29c05a4-c51b-4ef3-b341-fca8f5f68b53\") " Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.253022 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-utilities" (OuterVolumeSpecName: "utilities") pod "e29c05a4-c51b-4ef3-b341-fca8f5f68b53" (UID: "e29c05a4-c51b-4ef3-b341-fca8f5f68b53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.258499 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-kube-api-access-qmkxr" (OuterVolumeSpecName: "kube-api-access-qmkxr") pod "e29c05a4-c51b-4ef3-b341-fca8f5f68b53" (UID: "e29c05a4-c51b-4ef3-b341-fca8f5f68b53"). InnerVolumeSpecName "kube-api-access-qmkxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.306656 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e29c05a4-c51b-4ef3-b341-fca8f5f68b53" (UID: "e29c05a4-c51b-4ef3-b341-fca8f5f68b53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.354440 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.354472 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmkxr\" (UniqueName: \"kubernetes.io/projected/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-kube-api-access-qmkxr\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.354486 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29c05a4-c51b-4ef3-b341-fca8f5f68b53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.400127 4872 generic.go:334] "Generic (PLEG): container finished" podID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerID="b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576" exitCode=0 Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.400270 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-854hx" event={"ID":"e29c05a4-c51b-4ef3-b341-fca8f5f68b53","Type":"ContainerDied","Data":"b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576"} Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.400408 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-854hx" event={"ID":"e29c05a4-c51b-4ef3-b341-fca8f5f68b53","Type":"ContainerDied","Data":"7e707819e52f33703161021d94926136719b57518f1965853cfb13d8c18e21dc"} Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.400431 4872 scope.go:117] "RemoveContainer" containerID="b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.400883 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-854hx" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.422059 4872 scope.go:117] "RemoveContainer" containerID="5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.424915 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tltfz" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="registry-server" probeResult="failure" output=< Jan 27 06:57:09 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Jan 27 06:57:09 crc kubenswrapper[4872]: > Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.443263 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-854hx"] Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.447223 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-854hx"] Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.457050 4872 scope.go:117] "RemoveContainer" containerID="989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.473760 4872 scope.go:117] "RemoveContainer" containerID="b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576" Jan 27 06:57:09 crc kubenswrapper[4872]: E0127 06:57:09.475002 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576\": container with ID starting with b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576 not found: ID does not exist" containerID="b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.475026 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576"} err="failed to get container status \"b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576\": rpc error: code = NotFound desc = could not find container \"b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576\": container with ID starting with b61d1b9a4106ead5dc1acf913afe88644efe386d2064f415cd3099cb807e1576 not found: ID does not exist" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.475064 4872 scope.go:117] "RemoveContainer" containerID="5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40" Jan 27 06:57:09 crc kubenswrapper[4872]: E0127 06:57:09.475561 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40\": container with ID starting with 5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40 not found: ID does not exist" containerID="5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.475593 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40"} err="failed to get container status \"5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40\": rpc error: code = NotFound desc = could not find container \"5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40\": 
container with ID starting with 5196e855d417c339f648ecdadab3932dc60cba7c4e7084f896fa107c48720d40 not found: ID does not exist" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.475610 4872 scope.go:117] "RemoveContainer" containerID="989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0" Jan 27 06:57:09 crc kubenswrapper[4872]: E0127 06:57:09.475954 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0\": container with ID starting with 989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0 not found: ID does not exist" containerID="989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0" Jan 27 06:57:09 crc kubenswrapper[4872]: I0127 06:57:09.475984 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0"} err="failed to get container status \"989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0\": rpc error: code = NotFound desc = could not find container \"989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0\": container with ID starting with 989b325d42f01ab40374e6cafa943a164d13c1907c18dd4e19464ab51e0788b0 not found: ID does not exist" Jan 27 06:57:10 crc kubenswrapper[4872]: I0127 06:57:10.135708 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" path="/var/lib/kubelet/pods/e29c05a4-c51b-4ef3-b341-fca8f5f68b53/volumes" Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.354957 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78956d596-5r2rl"] Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.355935 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" podUID="51cb0c8c-bff2-4aed-9038-733f882dc2f7" containerName="controller-manager" containerID="cri-o://5a3396e58ccb076dbc83ce927ea16dc9cbfcfdd4979b0956fbcadde73c70891b" gracePeriod=30 Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.379293 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw"] Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.379525 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" podUID="1c6e9aac-6703-44ed-af38-bc1aaf478223" containerName="route-controller-manager" containerID="cri-o://bb111da8bc23b83ff7a68f0c891e47ff454879501c423c7e0cf5934851808faa" gracePeriod=30 Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.417832 4872 generic.go:334] "Generic (PLEG): container finished" podID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerID="d29386836a42f4e557b1e09e3ec3822788522498a26690866b79eabd65fd639d" exitCode=0 Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.417937 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvk24" event={"ID":"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1","Type":"ContainerDied","Data":"d29386836a42f4e557b1e09e3ec3822788522498a26690866b79eabd65fd639d"} Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.423448 4872 generic.go:334] "Generic (PLEG): container finished" podID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" 
containerID="ff3c69af608b565804e0a6ede206405bd1749b1b261ee2a854032a93ce9cc43d" exitCode=0 Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.423543 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pp8h2" event={"ID":"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b","Type":"ContainerDied","Data":"ff3c69af608b565804e0a6ede206405bd1749b1b261ee2a854032a93ce9cc43d"} Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.437801 4872 generic.go:334] "Generic (PLEG): container finished" podID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerID="78c5ed6b5039c9a8fc737551ec70d70b01f8d2dc98a6c13e53fd079644a7731c" exitCode=0 Jan 27 06:57:11 crc kubenswrapper[4872]: I0127 06:57:11.437880 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerDied","Data":"78c5ed6b5039c9a8fc737551ec70d70b01f8d2dc98a6c13e53fd079644a7731c"} Jan 27 06:57:12 crc kubenswrapper[4872]: I0127 06:57:12.449090 4872 generic.go:334] "Generic (PLEG): container finished" podID="1c6e9aac-6703-44ed-af38-bc1aaf478223" containerID="bb111da8bc23b83ff7a68f0c891e47ff454879501c423c7e0cf5934851808faa" exitCode=0 Jan 27 06:57:12 crc kubenswrapper[4872]: I0127 06:57:12.449285 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" event={"ID":"1c6e9aac-6703-44ed-af38-bc1aaf478223","Type":"ContainerDied","Data":"bb111da8bc23b83ff7a68f0c891e47ff454879501c423c7e0cf5934851808faa"} Jan 27 06:57:12 crc kubenswrapper[4872]: I0127 06:57:12.451361 4872 generic.go:334] "Generic (PLEG): container finished" podID="51cb0c8c-bff2-4aed-9038-733f882dc2f7" containerID="5a3396e58ccb076dbc83ce927ea16dc9cbfcfdd4979b0956fbcadde73c70891b" exitCode=0 Jan 27 06:57:12 crc kubenswrapper[4872]: I0127 06:57:12.451385 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" event={"ID":"51cb0c8c-bff2-4aed-9038-733f882dc2f7","Type":"ContainerDied","Data":"5a3396e58ccb076dbc83ce927ea16dc9cbfcfdd4979b0956fbcadde73c70891b"} Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.458061 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" event={"ID":"51cb0c8c-bff2-4aed-9038-733f882dc2f7","Type":"ContainerDied","Data":"9e342993a92e871c906f2cf9583dd8bd94013e1e080a89d0f2d4ffa2d3aecdb6"} Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.458108 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e342993a92e871c906f2cf9583dd8bd94013e1e080a89d0f2d4ffa2d3aecdb6" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.459488 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" event={"ID":"1c6e9aac-6703-44ed-af38-bc1aaf478223","Type":"ContainerDied","Data":"87f7eb71604b7a6be548196777f01c6fa6dbda2bfc57f84469e344ebd7db1bc8"} Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.459517 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f7eb71604b7a6be548196777f01c6fa6dbda2bfc57f84469e344ebd7db1bc8" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.480406 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.485361 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.502559 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7489f69d5-66pwf"] Jan 27 06:57:13 crc kubenswrapper[4872]: E0127 06:57:13.502869 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="registry-server" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.502887 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="registry-server" Jan 27 06:57:13 crc kubenswrapper[4872]: E0127 06:57:13.502902 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6e9aac-6703-44ed-af38-bc1aaf478223" containerName="route-controller-manager" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.502910 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6e9aac-6703-44ed-af38-bc1aaf478223" containerName="route-controller-manager" Jan 27 06:57:13 crc kubenswrapper[4872]: E0127 06:57:13.502921 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb0c8c-bff2-4aed-9038-733f882dc2f7" containerName="controller-manager" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.502931 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb0c8c-bff2-4aed-9038-733f882dc2f7" containerName="controller-manager" Jan 27 06:57:13 crc kubenswrapper[4872]: E0127 06:57:13.502948 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="extract-content" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.502955 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="extract-content" Jan 27 06:57:13 crc kubenswrapper[4872]: E0127 06:57:13.502967 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="extract-utilities" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.502974 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="extract-utilities" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.503093 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6e9aac-6703-44ed-af38-bc1aaf478223" containerName="route-controller-manager" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.503111 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb0c8c-bff2-4aed-9038-733f882dc2f7" containerName="controller-manager" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.503123 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29c05a4-c51b-4ef3-b341-fca8f5f68b53" containerName="registry-server" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.503573 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.513474 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7489f69d5-66pwf"] Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.651884 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-proxy-ca-bundles\") pod \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.651998 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-client-ca\") pod \"1c6e9aac-6703-44ed-af38-bc1aaf478223\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.652030 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-client-ca\") pod \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.652096 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb0c8c-bff2-4aed-9038-733f882dc2f7-serving-cert\") pod \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.652803 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "51cb0c8c-bff2-4aed-9038-733f882dc2f7" (UID: "51cb0c8c-bff2-4aed-9038-733f882dc2f7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.652803 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c6e9aac-6703-44ed-af38-bc1aaf478223" (UID: "1c6e9aac-6703-44ed-af38-bc1aaf478223"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.652982 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "51cb0c8c-bff2-4aed-9038-733f882dc2f7" (UID: "51cb0c8c-bff2-4aed-9038-733f882dc2f7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653111 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-config\") pod \"1c6e9aac-6703-44ed-af38-bc1aaf478223\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653155 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-config\") pod \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653224 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktnjf\" (UniqueName: \"kubernetes.io/projected/51cb0c8c-bff2-4aed-9038-733f882dc2f7-kube-api-access-ktnjf\") pod \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\" (UID: \"51cb0c8c-bff2-4aed-9038-733f882dc2f7\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653280 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6g7q\" (UniqueName: \"kubernetes.io/projected/1c6e9aac-6703-44ed-af38-bc1aaf478223-kube-api-access-v6g7q\") pod \"1c6e9aac-6703-44ed-af38-bc1aaf478223\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653308 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6e9aac-6703-44ed-af38-bc1aaf478223-serving-cert\") pod \"1c6e9aac-6703-44ed-af38-bc1aaf478223\" (UID: \"1c6e9aac-6703-44ed-af38-bc1aaf478223\") " Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653779 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-proxy-ca-bundles\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.653832 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad3f9b-a1da-4c15-a59c-3a0586036d85-serving-cert\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.654788 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-client-ca\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.654915 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgc4p\" (UniqueName: \"kubernetes.io/projected/0aad3f9b-a1da-4c15-a59c-3a0586036d85-kube-api-access-bgc4p\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " 
pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.654293 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-config" (OuterVolumeSpecName: "config") pod "51cb0c8c-bff2-4aed-9038-733f882dc2f7" (UID: "51cb0c8c-bff2-4aed-9038-733f882dc2f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.654303 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-config" (OuterVolumeSpecName: "config") pod "1c6e9aac-6703-44ed-af38-bc1aaf478223" (UID: "1c6e9aac-6703-44ed-af38-bc1aaf478223"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.654970 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-config\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.655093 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.655110 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.655119 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.655129 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6e9aac-6703-44ed-af38-bc1aaf478223-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.655137 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb0c8c-bff2-4aed-9038-733f882dc2f7-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.658417 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6e9aac-6703-44ed-af38-bc1aaf478223-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c6e9aac-6703-44ed-af38-bc1aaf478223" (UID: "1c6e9aac-6703-44ed-af38-bc1aaf478223"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.658947 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0c8c-bff2-4aed-9038-733f882dc2f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51cb0c8c-bff2-4aed-9038-733f882dc2f7" (UID: "51cb0c8c-bff2-4aed-9038-733f882dc2f7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.660109 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cb0c8c-bff2-4aed-9038-733f882dc2f7-kube-api-access-ktnjf" (OuterVolumeSpecName: "kube-api-access-ktnjf") pod "51cb0c8c-bff2-4aed-9038-733f882dc2f7" (UID: "51cb0c8c-bff2-4aed-9038-733f882dc2f7"). InnerVolumeSpecName "kube-api-access-ktnjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.660737 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6e9aac-6703-44ed-af38-bc1aaf478223-kube-api-access-v6g7q" (OuterVolumeSpecName: "kube-api-access-v6g7q") pod "1c6e9aac-6703-44ed-af38-bc1aaf478223" (UID: "1c6e9aac-6703-44ed-af38-bc1aaf478223"). InnerVolumeSpecName "kube-api-access-v6g7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760077 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-client-ca\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760147 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgc4p\" (UniqueName: \"kubernetes.io/projected/0aad3f9b-a1da-4c15-a59c-3a0586036d85-kube-api-access-bgc4p\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760176 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-config\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760219 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-proxy-ca-bundles\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760245 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad3f9b-a1da-4c15-a59c-3a0586036d85-serving-cert\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760295 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktnjf\" (UniqueName: \"kubernetes.io/projected/51cb0c8c-bff2-4aed-9038-733f882dc2f7-kube-api-access-ktnjf\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760308 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6g7q\" (UniqueName: 
\"kubernetes.io/projected/1c6e9aac-6703-44ed-af38-bc1aaf478223-kube-api-access-v6g7q\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760317 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6e9aac-6703-44ed-af38-bc1aaf478223-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.760331 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb0c8c-bff2-4aed-9038-733f882dc2f7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.761150 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-client-ca\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.762222 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-config\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.763270 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-proxy-ca-bundles\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.765389 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad3f9b-a1da-4c15-a59c-3a0586036d85-serving-cert\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.781709 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgc4p\" (UniqueName: \"kubernetes.io/projected/0aad3f9b-a1da-4c15-a59c-3a0586036d85-kube-api-access-bgc4p\") pod \"controller-manager-7489f69d5-66pwf\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:13 crc kubenswrapper[4872]: I0127 06:57:13.822376 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.339965 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7489f69d5-66pwf"] Jan 27 06:57:14 crc kubenswrapper[4872]: W0127 06:57:14.342742 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aad3f9b_a1da_4c15_a59c_3a0586036d85.slice/crio-0a0332fd29837de0aba59c83f2ed0a3adca3b74e7ca0f87c6538c63b05fc4dc1 WatchSource:0}: Error finding container 0a0332fd29837de0aba59c83f2ed0a3adca3b74e7ca0f87c6538c63b05fc4dc1: Status 404 returned error can't find the container with id 0a0332fd29837de0aba59c83f2ed0a3adca3b74e7ca0f87c6538c63b05fc4dc1 Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.465050 4872 generic.go:334] "Generic (PLEG): container finished" podID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerID="fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab" exitCode=0 Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.465115 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm92d" event={"ID":"7e6a557b-5fc3-46e7-9799-1b75567c2dc6","Type":"ContainerDied","Data":"fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab"} Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.468378 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pp8h2" event={"ID":"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b","Type":"ContainerStarted","Data":"4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4"} Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.470651 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerStarted","Data":"190507941f87e8fd66043293dcc43f217259a962b1eb685cc0e55717cb808d05"} Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.471558 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" event={"ID":"0aad3f9b-a1da-4c15-a59c-3a0586036d85","Type":"ContainerStarted","Data":"0a0332fd29837de0aba59c83f2ed0a3adca3b74e7ca0f87c6538c63b05fc4dc1"} Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.473526 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78956d596-5r2rl" Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.474098 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvk24" event={"ID":"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1","Type":"ContainerStarted","Data":"e8e57b2055efa19a0f38e0428ed49e5556018f43419fa867cba8e89f7ecf37a4"} Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.474425 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw" Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.504131 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78956d596-5r2rl"] Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.508502 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78956d596-5r2rl"] Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.520909 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pp8h2" podStartSLOduration=3.085551774 podStartE2EDuration="58.520895496s" podCreationTimestamp="2026-01-27 06:56:16 +0000 UTC" firstStartedPulling="2026-01-27 06:56:18.252625268 +0000 UTC m=+154.780100464" lastFinishedPulling="2026-01-27 06:57:13.68796899 +0000 UTC m=+210.215444186" observedRunningTime="2026-01-27 06:57:14.519309122 +0000 UTC m=+211.046784318" watchObservedRunningTime="2026-01-27 06:57:14.520895496 +0000 UTC m=+211.048370692" Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.550223 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cvk24" podStartSLOduration=4.058850267 podStartE2EDuration="58.550208819s" podCreationTimestamp="2026-01-27 06:56:16 +0000 UTC" firstStartedPulling="2026-01-27 06:56:19.378172287 +0000 UTC m=+155.905647483" lastFinishedPulling="2026-01-27 06:57:13.869530839 +0000 UTC m=+210.397006035" observedRunningTime="2026-01-27 06:57:14.540625033 +0000 UTC m=+211.068100219" watchObservedRunningTime="2026-01-27 06:57:14.550208819 +0000 UTC m=+211.077684015" Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.551010 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw"] Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.557406 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdf57d98f-nqgbw"] Jan 27 06:57:14 crc kubenswrapper[4872]: I0127 06:57:14.571508 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8r2pn" podStartSLOduration=3.253425053 podStartE2EDuration="57.57149287s" podCreationTimestamp="2026-01-27 06:56:17 +0000 UTC" firstStartedPulling="2026-01-27 06:56:19.412041512 +0000 UTC m=+155.939516698" lastFinishedPulling="2026-01-27 06:57:13.730109319 +0000 UTC m=+210.257584515" observedRunningTime="2026-01-27 06:57:14.567090088 +0000 UTC m=+211.094565284" watchObservedRunningTime="2026-01-27 06:57:14.57149287 +0000 UTC m=+211.098968066" Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 06:57:15.354187 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 06:57:15.479689 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" event={"ID":"0aad3f9b-a1da-4c15-a59c-3a0586036d85","Type":"ContainerStarted","Data":"1c1b95db825475dfae017724299cba97443e59158c00bc33d8c4aca17d865128"} Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 06:57:15.481333 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 
06:57:15.483664 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm92d" event={"ID":"7e6a557b-5fc3-46e7-9799-1b75567c2dc6","Type":"ContainerStarted","Data":"3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52"} Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 06:57:15.485108 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 06:57:15.507480 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" podStartSLOduration=4.507460706 podStartE2EDuration="4.507460706s" podCreationTimestamp="2026-01-27 06:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:15.504931365 +0000 UTC m=+212.032406561" watchObservedRunningTime="2026-01-27 06:57:15.507460706 +0000 UTC m=+212.034935902" Jan 27 06:57:15 crc kubenswrapper[4872]: I0127 06:57:15.522284 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mm92d" podStartSLOduration=3.550293915 podStartE2EDuration="1m1.522270256s" podCreationTimestamp="2026-01-27 06:56:14 +0000 UTC" firstStartedPulling="2026-01-27 06:56:17.174817295 +0000 UTC m=+153.702292491" lastFinishedPulling="2026-01-27 06:57:15.146793636 +0000 UTC m=+211.674268832" observedRunningTime="2026-01-27 06:57:15.519079438 +0000 UTC m=+212.046554644" watchObservedRunningTime="2026-01-27 06:57:15.522270256 +0000 UTC m=+212.049745452" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.103725 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6e9aac-6703-44ed-af38-bc1aaf478223" path="/var/lib/kubelet/pods/1c6e9aac-6703-44ed-af38-bc1aaf478223/volumes" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.104454 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cb0c8c-bff2-4aed-9038-733f882dc2f7" path="/var/lib/kubelet/pods/51cb0c8c-bff2-4aed-9038-733f882dc2f7/volumes" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.463808 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r"] Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.464873 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.466553 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.466773 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.466912 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.467969 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.468096 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.477521 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.513909 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r"] Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.579226 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.580131 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.593282 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-config\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.593323 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-client-ca\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.593376 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d50439-48ba-4960-9502-f093035ded22-serving-cert\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.593407 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fhp\" (UniqueName: \"kubernetes.io/projected/b3d50439-48ba-4960-9502-f093035ded22-kube-api-access-f5fhp\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: 
\"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.631029 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.694832 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-config\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.694907 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-client-ca\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.694988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d50439-48ba-4960-9502-f093035ded22-serving-cert\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.695881 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-client-ca\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.695023 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fhp\" (UniqueName: \"kubernetes.io/projected/b3d50439-48ba-4960-9502-f093035ded22-kube-api-access-f5fhp\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.696172 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-config\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.706908 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d50439-48ba-4960-9502-f093035ded22-serving-cert\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.721642 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fhp\" (UniqueName: 
\"kubernetes.io/projected/b3d50439-48ba-4960-9502-f093035ded22-kube-api-access-f5fhp\") pod \"route-controller-manager-59f7c674b7-w4g4r\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.780276 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.984358 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:57:16 crc kubenswrapper[4872]: I0127 06:57:16.984806 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:57:17 crc kubenswrapper[4872]: I0127 06:57:17.051284 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:57:17 crc kubenswrapper[4872]: I0127 06:57:17.073659 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r"] Jan 27 06:57:17 crc kubenswrapper[4872]: I0127 06:57:17.494982 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" event={"ID":"b3d50439-48ba-4960-9502-f093035ded22","Type":"ContainerStarted","Data":"b57fb1cdfe7b2743af9b31a4f49d28b35f7a271000db670069cce02f813e46d5"} Jan 27 06:57:17 crc kubenswrapper[4872]: I0127 06:57:17.495029 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" event={"ID":"b3d50439-48ba-4960-9502-f093035ded22","Type":"ContainerStarted","Data":"9d9d6fed39e590de18e5999d25ac431316ea9101c5fa7e9a5feeba8af466dc7b"} Jan 27 06:57:17 crc kubenswrapper[4872]: I0127 06:57:17.511572 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" podStartSLOduration=6.511555004 podStartE2EDuration="6.511555004s" podCreationTimestamp="2026-01-27 06:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:17.509752594 +0000 UTC m=+214.037227790" watchObservedRunningTime="2026-01-27 06:57:17.511555004 +0000 UTC m=+214.039030200" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:17.999834 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.000267 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.092287 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.422489 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.449271 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm92d"] Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.449502 4872 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mm92d" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="registry-server" containerID="cri-o://3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52" gracePeriod=2 Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.464147 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.501976 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.511898 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.548455 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:57:18 crc kubenswrapper[4872]: I0127 06:57:18.885759 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.038654 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8r2pn" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="registry-server" probeResult="failure" output=< Jan 27 06:57:19 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Jan 27 06:57:19 crc kubenswrapper[4872]: > Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.051442 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-catalog-content\") pod \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.051499 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fpw2\" (UniqueName: \"kubernetes.io/projected/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-kube-api-access-8fpw2\") pod \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.051575 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-utilities\") pod \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\" (UID: \"7e6a557b-5fc3-46e7-9799-1b75567c2dc6\") " Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.052439 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-utilities" (OuterVolumeSpecName: "utilities") pod "7e6a557b-5fc3-46e7-9799-1b75567c2dc6" (UID: "7e6a557b-5fc3-46e7-9799-1b75567c2dc6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.058360 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-kube-api-access-8fpw2" (OuterVolumeSpecName: "kube-api-access-8fpw2") pod "7e6a557b-5fc3-46e7-9799-1b75567c2dc6" (UID: "7e6a557b-5fc3-46e7-9799-1b75567c2dc6"). InnerVolumeSpecName "kube-api-access-8fpw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.109316 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e6a557b-5fc3-46e7-9799-1b75567c2dc6" (UID: "7e6a557b-5fc3-46e7-9799-1b75567c2dc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.152889 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.152933 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.152948 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fpw2\" (UniqueName: \"kubernetes.io/projected/7e6a557b-5fc3-46e7-9799-1b75567c2dc6-kube-api-access-8fpw2\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.507559 4872 generic.go:334] "Generic (PLEG): container finished" podID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerID="3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52" exitCode=0 Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.508340 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm92d" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.509959 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm92d" event={"ID":"7e6a557b-5fc3-46e7-9799-1b75567c2dc6","Type":"ContainerDied","Data":"3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52"} Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.510004 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm92d" event={"ID":"7e6a557b-5fc3-46e7-9799-1b75567c2dc6","Type":"ContainerDied","Data":"be8e41d8e961be6c1fe6247b1394cb087cafdd280c8eece962eac81c3dba3e0a"} Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.510027 4872 scope.go:117] "RemoveContainer" containerID="3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.524486 4872 scope.go:117] "RemoveContainer" containerID="fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.543209 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm92d"] Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.545770 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mm92d"] Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.562038 4872 scope.go:117] "RemoveContainer" containerID="3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.577177 4872 scope.go:117] "RemoveContainer" containerID="3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52" Jan 27 06:57:19 crc kubenswrapper[4872]: E0127 06:57:19.577658 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52\": container with ID starting with 3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52 not found: ID does not exist" containerID="3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.577695 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52"} err="failed to get container status \"3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52\": rpc error: code = NotFound desc = could not find container \"3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52\": container with ID starting with 3dd2ee7ef4f8860baeee706d2c2fcf561f1dc9ebd77b5599bf40536227f4ac52 not found: ID does not exist" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.577721 4872 scope.go:117] "RemoveContainer" containerID="fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab" Jan 27 06:57:19 crc kubenswrapper[4872]: E0127 06:57:19.578015 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab\": container with ID starting with fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab not found: ID does not exist" containerID="fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.578059 4872 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab"} err="failed to get container status \"fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab\": rpc error: code = NotFound desc = could not find container \"fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab\": container with ID starting with fe16246701b8d579c68b8778c3c78ca74306ff6bf867af9d042f55e00c8979ab not found: ID does not exist" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.578074 4872 scope.go:117] "RemoveContainer" containerID="3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90" Jan 27 06:57:19 crc kubenswrapper[4872]: E0127 06:57:19.578315 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90\": container with ID starting with 3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90 not found: ID does not exist" containerID="3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90" Jan 27 06:57:19 crc kubenswrapper[4872]: I0127 06:57:19.578356 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90"} err="failed to get container status \"3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90\": rpc error: code = NotFound desc = could not find container \"3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90\": container with ID starting with 3ccf7957f37f10bfdc689568a034d90cb3db8648df22e5e971c39d824550ca90 not found: ID does not exist" Jan 27 06:57:20 crc kubenswrapper[4872]: I0127 06:57:20.105530 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" path="/var/lib/kubelet/pods/7e6a557b-5fc3-46e7-9799-1b75567c2dc6/volumes" Jan 27 06:57:20 crc kubenswrapper[4872]: I0127 06:57:20.847382 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tltfz"] Jan 27 06:57:20 crc kubenswrapper[4872]: I0127 06:57:20.847619 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tltfz" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="registry-server" containerID="cri-o://2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb" gracePeriod=2 Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.275228 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.325058 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvx77\" (UniqueName: \"kubernetes.io/projected/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-kube-api-access-zvx77\") pod \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.325219 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-catalog-content\") pod \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.325263 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-utilities\") pod \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\" (UID: \"02c45bcc-19a3-41d7-b7dc-b954a7f2301a\") " Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.326413 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-utilities" (OuterVolumeSpecName: "utilities") pod "02c45bcc-19a3-41d7-b7dc-b954a7f2301a" (UID: "02c45bcc-19a3-41d7-b7dc-b954a7f2301a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.328425 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-kube-api-access-zvx77" (OuterVolumeSpecName: "kube-api-access-zvx77") pod "02c45bcc-19a3-41d7-b7dc-b954a7f2301a" (UID: "02c45bcc-19a3-41d7-b7dc-b954a7f2301a"). InnerVolumeSpecName "kube-api-access-zvx77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.426797 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.426876 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvx77\" (UniqueName: \"kubernetes.io/projected/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-kube-api-access-zvx77\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.456586 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c45bcc-19a3-41d7-b7dc-b954a7f2301a" (UID: "02c45bcc-19a3-41d7-b7dc-b954a7f2301a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.535900 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c45bcc-19a3-41d7-b7dc-b954a7f2301a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.541168 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerDied","Data":"2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb"} Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.540938 4872 generic.go:334] "Generic (PLEG): container finished" podID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerID="2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb" exitCode=0 Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.541231 4872 scope.go:117] "RemoveContainer" containerID="2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.541286 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tltfz" event={"ID":"02c45bcc-19a3-41d7-b7dc-b954a7f2301a","Type":"ContainerDied","Data":"e66fe3ab72d2a368cc7251e78c99ca5eee1ef7ad94963a5a1e0ce27f7b6b80a3"} Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.542038 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tltfz" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.563586 4872 scope.go:117] "RemoveContainer" containerID="e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.576649 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tltfz"] Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.581635 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tltfz"] Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.595458 4872 scope.go:117] "RemoveContainer" containerID="ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.617306 4872 scope.go:117] "RemoveContainer" containerID="2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb" Jan 27 06:57:21 crc kubenswrapper[4872]: E0127 06:57:21.617691 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb\": container with ID starting with 2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb not found: ID does not exist" containerID="2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.617735 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb"} err="failed to get container status \"2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb\": rpc error: code = NotFound desc = could not find container \"2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb\": container with ID starting with 2ac1399da8c9c2f1577788f12e8a930e5176a63a1a1a01d1289636366f235beb not found: ID does not exist" Jan 27 06:57:21 crc 
kubenswrapper[4872]: I0127 06:57:21.617768 4872 scope.go:117] "RemoveContainer" containerID="e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a" Jan 27 06:57:21 crc kubenswrapper[4872]: E0127 06:57:21.618120 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a\": container with ID starting with e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a not found: ID does not exist" containerID="e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.618159 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a"} err="failed to get container status \"e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a\": rpc error: code = NotFound desc = could not find container \"e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a\": container with ID starting with e4095c82259b4b562ca8960f969376cbb0841250e0803b371b1cb873922d2c0a not found: ID does not exist" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.618175 4872 scope.go:117] "RemoveContainer" containerID="ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709" Jan 27 06:57:21 crc kubenswrapper[4872]: E0127 06:57:21.618422 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709\": container with ID starting with ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709 not found: ID does not exist" containerID="ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709" Jan 27 06:57:21 crc kubenswrapper[4872]: I0127 06:57:21.618450 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709"} err="failed to get container status \"ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709\": rpc error: code = NotFound desc = could not find container \"ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709\": container with ID starting with ade90521d44f60555e21d0b600792104e4c5db2d1aae1877d3376cac7367c709 not found: ID does not exist" Jan 27 06:57:22 crc kubenswrapper[4872]: I0127 06:57:22.109886 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" path="/var/lib/kubelet/pods/02c45bcc-19a3-41d7-b7dc-b954a7f2301a/volumes" Jan 27 06:57:24 crc kubenswrapper[4872]: I0127 06:57:24.530079 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vvsln"] Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.001560 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.001623 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.001670 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.002322 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.002388 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6" gracePeriod=600 Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.561928 4872 generic.go:334] "Generic (PLEG): container finished" podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6" exitCode=0 Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.562168 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6"} Jan 27 06:57:25 crc kubenswrapper[4872]: I0127 06:57:25.562191 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"80e90793575fa0faed445037a9d207d5dd942489472202844d524e334d89cc5a"} Jan 27 06:57:27 crc kubenswrapper[4872]: I0127 06:57:27.041130 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.040487 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.081254 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.246890 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvk24"] Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.247139 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cvk24" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="registry-server" containerID="cri-o://e8e57b2055efa19a0f38e0428ed49e5556018f43419fa867cba8e89f7ecf37a4" gracePeriod=2 Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.577246 4872 generic.go:334] "Generic (PLEG): container finished" podID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerID="e8e57b2055efa19a0f38e0428ed49e5556018f43419fa867cba8e89f7ecf37a4" exitCode=0 Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.577308 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cvk24" event={"ID":"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1","Type":"ContainerDied","Data":"e8e57b2055efa19a0f38e0428ed49e5556018f43419fa867cba8e89f7ecf37a4"} Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.675237 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.817009 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-utilities\") pod \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.817190 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-catalog-content\") pod \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.817219 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kgp6\" (UniqueName: \"kubernetes.io/projected/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-kube-api-access-2kgp6\") pod \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\" (UID: \"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1\") " Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.817899 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-utilities" (OuterVolumeSpecName: "utilities") pod "cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" (UID: "cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.822825 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-kube-api-access-2kgp6" (OuterVolumeSpecName: "kube-api-access-2kgp6") pod "cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" (UID: "cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1"). InnerVolumeSpecName "kube-api-access-2kgp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.838005 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" (UID: "cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.918504 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.918544 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kgp6\" (UniqueName: \"kubernetes.io/projected/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-kube-api-access-2kgp6\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:28 crc kubenswrapper[4872]: I0127 06:57:28.918556 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.584472 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cvk24" event={"ID":"cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1","Type":"ContainerDied","Data":"85fde09e42885abc835568bae0b47c9ef600217534d1c56b2261e6e5a17791ee"} Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.584520 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cvk24" Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.585568 4872 scope.go:117] "RemoveContainer" containerID="e8e57b2055efa19a0f38e0428ed49e5556018f43419fa867cba8e89f7ecf37a4" Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.601456 4872 scope.go:117] "RemoveContainer" containerID="d29386836a42f4e557b1e09e3ec3822788522498a26690866b79eabd65fd639d" Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.612999 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvk24"] Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.616830 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cvk24"] Jan 27 06:57:29 crc kubenswrapper[4872]: I0127 06:57:29.629026 4872 scope.go:117] "RemoveContainer" containerID="64b83408cdd6d78bccd848f3e78439f78f2230c0d8f94b6b5e33b5caf3790085" Jan 27 06:57:30 crc kubenswrapper[4872]: I0127 06:57:30.107648 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" path="/var/lib/kubelet/pods/cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1/volumes" Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.339397 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7489f69d5-66pwf"] Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.339600 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" podUID="0aad3f9b-a1da-4c15-a59c-3a0586036d85" containerName="controller-manager" containerID="cri-o://1c1b95db825475dfae017724299cba97443e59158c00bc33d8c4aca17d865128" gracePeriod=30 Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.443051 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r"] Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.443491 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" 
podUID="b3d50439-48ba-4960-9502-f093035ded22" containerName="route-controller-manager" containerID="cri-o://b57fb1cdfe7b2743af9b31a4f49d28b35f7a271000db670069cce02f813e46d5" gracePeriod=30 Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.595253 4872 generic.go:334] "Generic (PLEG): container finished" podID="0aad3f9b-a1da-4c15-a59c-3a0586036d85" containerID="1c1b95db825475dfae017724299cba97443e59158c00bc33d8c4aca17d865128" exitCode=0 Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.595312 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" event={"ID":"0aad3f9b-a1da-4c15-a59c-3a0586036d85","Type":"ContainerDied","Data":"1c1b95db825475dfae017724299cba97443e59158c00bc33d8c4aca17d865128"} Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.596942 4872 generic.go:334] "Generic (PLEG): container finished" podID="b3d50439-48ba-4960-9502-f093035ded22" containerID="b57fb1cdfe7b2743af9b31a4f49d28b35f7a271000db670069cce02f813e46d5" exitCode=0 Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.596978 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" event={"ID":"b3d50439-48ba-4960-9502-f093035ded22","Type":"ContainerDied","Data":"b57fb1cdfe7b2743af9b31a4f49d28b35f7a271000db670069cce02f813e46d5"} Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.929721 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.988012 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-config\") pod \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.988071 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-proxy-ca-bundles\") pod \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.988091 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad3f9b-a1da-4c15-a59c-3a0586036d85-serving-cert\") pod \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.988110 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgc4p\" (UniqueName: \"kubernetes.io/projected/0aad3f9b-a1da-4c15-a59c-3a0586036d85-kube-api-access-bgc4p\") pod \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.988145 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-client-ca\") pod \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\" (UID: \"0aad3f9b-a1da-4c15-a59c-3a0586036d85\") " Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.988885 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-client-ca" (OuterVolumeSpecName: "client-ca") pod "0aad3f9b-a1da-4c15-a59c-3a0586036d85" (UID: "0aad3f9b-a1da-4c15-a59c-3a0586036d85"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.989045 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-config" (OuterVolumeSpecName: "config") pod "0aad3f9b-a1da-4c15-a59c-3a0586036d85" (UID: "0aad3f9b-a1da-4c15-a59c-3a0586036d85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:31 crc kubenswrapper[4872]: I0127 06:57:31.989762 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0aad3f9b-a1da-4c15-a59c-3a0586036d85" (UID: "0aad3f9b-a1da-4c15-a59c-3a0586036d85"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.000024 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aad3f9b-a1da-4c15-a59c-3a0586036d85-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0aad3f9b-a1da-4c15-a59c-3a0586036d85" (UID: "0aad3f9b-a1da-4c15-a59c-3a0586036d85"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.001537 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aad3f9b-a1da-4c15-a59c-3a0586036d85-kube-api-access-bgc4p" (OuterVolumeSpecName: "kube-api-access-bgc4p") pod "0aad3f9b-a1da-4c15-a59c-3a0586036d85" (UID: "0aad3f9b-a1da-4c15-a59c-3a0586036d85"). InnerVolumeSpecName "kube-api-access-bgc4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.089026 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.089060 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.089071 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aad3f9b-a1da-4c15-a59c-3a0586036d85-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.089082 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad3f9b-a1da-4c15-a59c-3a0586036d85-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.089093 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgc4p\" (UniqueName: \"kubernetes.io/projected/0aad3f9b-a1da-4c15-a59c-3a0586036d85-kube-api-access-bgc4p\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.311372 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.492683 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-config\") pod \"b3d50439-48ba-4960-9502-f093035ded22\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.493648 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-config" (OuterVolumeSpecName: "config") pod "b3d50439-48ba-4960-9502-f093035ded22" (UID: "b3d50439-48ba-4960-9502-f093035ded22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.493806 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d50439-48ba-4960-9502-f093035ded22-serving-cert\") pod \"b3d50439-48ba-4960-9502-f093035ded22\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.493867 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-client-ca\") pod \"b3d50439-48ba-4960-9502-f093035ded22\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.493929 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5fhp\" (UniqueName: \"kubernetes.io/projected/b3d50439-48ba-4960-9502-f093035ded22-kube-api-access-f5fhp\") pod \"b3d50439-48ba-4960-9502-f093035ded22\" (UID: \"b3d50439-48ba-4960-9502-f093035ded22\") " Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.494244 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.495313 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-client-ca" (OuterVolumeSpecName: "client-ca") pod "b3d50439-48ba-4960-9502-f093035ded22" (UID: "b3d50439-48ba-4960-9502-f093035ded22"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.496998 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d50439-48ba-4960-9502-f093035ded22-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b3d50439-48ba-4960-9502-f093035ded22" (UID: "b3d50439-48ba-4960-9502-f093035ded22"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.502837 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67f868f89c-chhdk"] Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505648 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aad3f9b-a1da-4c15-a59c-3a0586036d85" containerName="controller-manager" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505674 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aad3f9b-a1da-4c15-a59c-3a0586036d85" containerName="controller-manager" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505689 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="extract-utilities" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505697 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="extract-utilities" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505709 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="extract-utilities" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505715 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="extract-utilities" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505724 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505729 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505737 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="extract-utilities" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505742 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="extract-utilities" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505751 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505756 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505766 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="extract-content" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505771 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="extract-content" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505780 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505786 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505794 4872 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b3d50439-48ba-4960-9502-f093035ded22" containerName="route-controller-manager" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505800 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d50439-48ba-4960-9502-f093035ded22" containerName="route-controller-manager" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505812 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="extract-content" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505817 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="extract-content" Jan 27 06:57:32 crc kubenswrapper[4872]: E0127 06:57:32.505826 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="extract-content" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505832 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="extract-content" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505934 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbce5e17-ac6e-43a9-b73a-9a6a50fee3e1" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505948 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aad3f9b-a1da-4c15-a59c-3a0586036d85" containerName="controller-manager" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505954 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c45bcc-19a3-41d7-b7dc-b954a7f2301a" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505964 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d50439-48ba-4960-9502-f093035ded22" containerName="route-controller-manager" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.505976 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6a557b-5fc3-46e7-9799-1b75567c2dc6" containerName="registry-server" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.506277 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.506331 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d50439-48ba-4960-9502-f093035ded22-kube-api-access-f5fhp" (OuterVolumeSpecName: "kube-api-access-f5fhp") pod "b3d50439-48ba-4960-9502-f093035ded22" (UID: "b3d50439-48ba-4960-9502-f093035ded22"). InnerVolumeSpecName "kube-api-access-f5fhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.506373 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.514206 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.521904 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67f868f89c-chhdk"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.522703 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.594945 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-serving-cert\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.594987 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-client-ca\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595010 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-config\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595033 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-client-ca\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595061 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs254\" (UniqueName: \"kubernetes.io/projected/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-kube-api-access-hs254\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595085 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-proxy-ca-bundles\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595103 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e15ad3c-685b-4465-a63b-d9cab3182910-serving-cert\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " 
pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595120 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-config\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595310 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2fgs\" (UniqueName: \"kubernetes.io/projected/6e15ad3c-685b-4465-a63b-d9cab3182910-kube-api-access-t2fgs\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595412 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d50439-48ba-4960-9502-f093035ded22-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595443 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3d50439-48ba-4960-9502-f093035ded22-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.595454 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5fhp\" (UniqueName: \"kubernetes.io/projected/b3d50439-48ba-4960-9502-f093035ded22-kube-api-access-f5fhp\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.603076 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" event={"ID":"0aad3f9b-a1da-4c15-a59c-3a0586036d85","Type":"ContainerDied","Data":"0a0332fd29837de0aba59c83f2ed0a3adca3b74e7ca0f87c6538c63b05fc4dc1"} Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.603330 4872 scope.go:117] "RemoveContainer" containerID="1c1b95db825475dfae017724299cba97443e59158c00bc33d8c4aca17d865128" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.603121 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7489f69d5-66pwf" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.604291 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" event={"ID":"b3d50439-48ba-4960-9502-f093035ded22","Type":"ContainerDied","Data":"9d9d6fed39e590de18e5999d25ac431316ea9101c5fa7e9a5feeba8af466dc7b"} Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.604364 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.621049 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7489f69d5-66pwf"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.624392 4872 scope.go:117] "RemoveContainer" containerID="b57fb1cdfe7b2743af9b31a4f49d28b35f7a271000db670069cce02f813e46d5" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.628414 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7489f69d5-66pwf"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.647823 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.651676 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f7c674b7-w4g4r"] Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696095 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-config\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696354 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2fgs\" (UniqueName: \"kubernetes.io/projected/6e15ad3c-685b-4465-a63b-d9cab3182910-kube-api-access-t2fgs\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696466 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-serving-cert\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696577 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-client-ca\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696661 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-config\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696740 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-client-ca\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " 
pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696834 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs254\" (UniqueName: \"kubernetes.io/projected/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-kube-api-access-hs254\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.696928 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-proxy-ca-bundles\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.697008 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e15ad3c-685b-4465-a63b-d9cab3182910-serving-cert\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.697434 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-client-ca\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.697568 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-client-ca\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.697947 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-config\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.698263 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-config\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.698544 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e15ad3c-685b-4465-a63b-d9cab3182910-proxy-ca-bundles\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.700718 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e15ad3c-685b-4465-a63b-d9cab3182910-serving-cert\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.701670 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-serving-cert\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.713939 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs254\" (UniqueName: \"kubernetes.io/projected/88ed22ee-b6a2-40a0-b1e0-7a1589b987ec-kube-api-access-hs254\") pod \"route-controller-manager-58d6bbdd59-x8fgm\" (UID: \"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec\") " pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.718256 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2fgs\" (UniqueName: \"kubernetes.io/projected/6e15ad3c-685b-4465-a63b-d9cab3182910-kube-api-access-t2fgs\") pod \"controller-manager-67f868f89c-chhdk\" (UID: \"6e15ad3c-685b-4465-a63b-d9cab3182910\") " pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.843604 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:32 crc kubenswrapper[4872]: I0127 06:57:32.852097 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.125340 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67f868f89c-chhdk"] Jan 27 06:57:33 crc kubenswrapper[4872]: W0127 06:57:33.130654 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e15ad3c_685b_4465_a63b_d9cab3182910.slice/crio-437205e140cc299cad313c9c5d761d74f2487607aad6fca5c6ecea1622d3e936 WatchSource:0}: Error finding container 437205e140cc299cad313c9c5d761d74f2487607aad6fca5c6ecea1622d3e936: Status 404 returned error can't find the container with id 437205e140cc299cad313c9c5d761d74f2487607aad6fca5c6ecea1622d3e936 Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.276278 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm"] Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.611209 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" event={"ID":"6e15ad3c-685b-4465-a63b-d9cab3182910","Type":"ContainerStarted","Data":"3fbf3adcef226189d83daf775ba6ee0a7fafea6307d9dfc8ecfaa09732f4bd5b"} Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.611762 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" event={"ID":"6e15ad3c-685b-4465-a63b-d9cab3182910","Type":"ContainerStarted","Data":"437205e140cc299cad313c9c5d761d74f2487607aad6fca5c6ecea1622d3e936"} Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.611781 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.615150 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" event={"ID":"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec","Type":"ContainerStarted","Data":"5e486ddd016b2951604129ea9edf8ae4c50d22454727a66c41948bf5f974b730"} Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.615193 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" event={"ID":"88ed22ee-b6a2-40a0-b1e0-7a1589b987ec","Type":"ContainerStarted","Data":"f8cf28c6a8dbab01640628b3cf6724bc7e37443746f243d98e082a57e8d7c1c0"} Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.615681 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.639663 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" podStartSLOduration=2.639639308 podStartE2EDuration="2.639639308s" podCreationTimestamp="2026-01-27 06:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:33.635299917 +0000 UTC m=+230.162775113" watchObservedRunningTime="2026-01-27 06:57:33.639639308 +0000 UTC m=+230.167114504" Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.661918 4872 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" podStartSLOduration=2.661838404 podStartE2EDuration="2.661838404s" podCreationTimestamp="2026-01-27 06:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:33.660895737 +0000 UTC m=+230.188370933" watchObservedRunningTime="2026-01-27 06:57:33.661838404 +0000 UTC m=+230.189313600" Jan 27 06:57:33 crc kubenswrapper[4872]: I0127 06:57:33.676124 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67f868f89c-chhdk" Jan 27 06:57:34 crc kubenswrapper[4872]: I0127 06:57:34.104275 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aad3f9b-a1da-4c15-a59c-3a0586036d85" path="/var/lib/kubelet/pods/0aad3f9b-a1da-4c15-a59c-3a0586036d85/volumes" Jan 27 06:57:34 crc kubenswrapper[4872]: I0127 06:57:34.105268 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d50439-48ba-4960-9502-f093035ded22" path="/var/lib/kubelet/pods/b3d50439-48ba-4960-9502-f093035ded22/volumes" Jan 27 06:57:34 crc kubenswrapper[4872]: I0127 06:57:34.232937 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58d6bbdd59-x8fgm" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.381393 4872 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.382575 4872 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.382698 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.382866 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8" gracePeriod=15 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.382953 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805" gracePeriod=15 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.382995 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb" gracePeriod=15 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.383038 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c" gracePeriod=15 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.383047 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9" gracePeriod=15 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.383699 4872 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.383883 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.383898 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.383909 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.383915 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.383930 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.383972 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.383991 4872 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384000 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.384009 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384016 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.384026 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384035 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.384045 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384051 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384178 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384195 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384213 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384223 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384234 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.384242 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.418338 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498599 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498651 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498686 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498709 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498732 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498751 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498764 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.498792 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600645 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600717 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600764 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600830 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600797 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600886 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600888 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600923 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600928 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600948 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600962 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.600985 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.601008 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.601039 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.601044 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.601061 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.657383 4872 generic.go:334] "Generic (PLEG): container finished" podID="5e764f4a-dc17-468f-9f40-da1cd46c4098" containerID="2af3c5ed4af28705b77e48e6a06c2bb00619465facb4dd7b15328c2b0836bd2f" exitCode=0 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.657464 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5e764f4a-dc17-468f-9f40-da1cd46c4098","Type":"ContainerDied","Data":"2af3c5ed4af28705b77e48e6a06c2bb00619465facb4dd7b15328c2b0836bd2f"} Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.658297 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.658604 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.659009 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.660628 4872 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.661781 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.662407 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c" exitCode=0 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.662437 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805" exitCode=0 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.662447 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb" exitCode=0 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.662455 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9" exitCode=2 Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.662493 4872 scope.go:117] "RemoveContainer" containerID="3e19c06a1e23d93f1221055d5d48d6fef5a35bd98aafdac2972cc84b4e277343" Jan 27 06:57:41 crc kubenswrapper[4872]: I0127 06:57:41.716008 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:57:41 crc kubenswrapper[4872]: W0127 06:57:41.733234 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-a2e15d392feb536216b9ea723b588a6ffe4fad7413dde9780c9361b52ea3c3a0 WatchSource:0}: Error finding container a2e15d392feb536216b9ea723b588a6ffe4fad7413dde9780c9361b52ea3c3a0: Status 404 returned error can't find the container with id a2e15d392feb536216b9ea723b588a6ffe4fad7413dde9780c9361b52ea3c3a0 Jan 27 06:57:41 crc kubenswrapper[4872]: E0127 06:57:41.736339 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8433d3dc1da9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:57:41.735800233 +0000 UTC m=+238.263275419,LastTimestamp:2026-01-27 06:57:41.735800233 +0000 UTC m=+238.263275419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:57:42 
crc kubenswrapper[4872]: I0127 06:57:42.670014 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:57:42 crc kubenswrapper[4872]: I0127 06:57:42.671928 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e11c1291b1986cb2f5f9d947fb6a58aa9e0f775fb3adf910babf76fb1872421e"} Jan 27 06:57:42 crc kubenswrapper[4872]: I0127 06:57:42.671955 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a2e15d392feb536216b9ea723b588a6ffe4fad7413dde9780c9361b52ea3c3a0"} Jan 27 06:57:42 crc kubenswrapper[4872]: I0127 06:57:42.672452 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:42 crc kubenswrapper[4872]: I0127 06:57:42.672684 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.006169 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.006930 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.007407 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.015191 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-kubelet-dir\") pod \"5e764f4a-dc17-468f-9f40-da1cd46c4098\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.015490 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-var-lock\") pod \"5e764f4a-dc17-468f-9f40-da1cd46c4098\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.015674 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e764f4a-dc17-468f-9f40-da1cd46c4098-kube-api-access\") pod \"5e764f4a-dc17-468f-9f40-da1cd46c4098\" (UID: \"5e764f4a-dc17-468f-9f40-da1cd46c4098\") " Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.015305 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e764f4a-dc17-468f-9f40-da1cd46c4098" (UID: "5e764f4a-dc17-468f-9f40-da1cd46c4098"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.015538 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-var-lock" (OuterVolumeSpecName: "var-lock") pod "5e764f4a-dc17-468f-9f40-da1cd46c4098" (UID: "5e764f4a-dc17-468f-9f40-da1cd46c4098"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.016097 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.016173 4872 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e764f4a-dc17-468f-9f40-da1cd46c4098-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.022496 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e764f4a-dc17-468f-9f40-da1cd46c4098-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e764f4a-dc17-468f-9f40-da1cd46c4098" (UID: "5e764f4a-dc17-468f-9f40-da1cd46c4098"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.116966 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e764f4a-dc17-468f-9f40-da1cd46c4098-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.506883 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.507615 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.509147 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.509526 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.509719 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.509748 4872 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.510097 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.678470 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.678535 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5e764f4a-dc17-468f-9f40-da1cd46c4098","Type":"ContainerDied","Data":"bc3d1e0b589a56059ec5b1e3a666cd9cde4bf0799e61205635dda58dda5825aa"} Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.678573 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3d1e0b589a56059ec5b1e3a666cd9cde4bf0799e61205635dda58dda5825aa" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.690743 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.691225 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: E0127 06:57:43.710769 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.843890 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.844617 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.845132 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.845430 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.845871 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.926831 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.926949 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.926984 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.927060 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.927054 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.927127 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.927311 4872 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.927330 4872 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:43 crc kubenswrapper[4872]: I0127 06:57:43.927344 4872 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.101058 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.101422 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.101705 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.107117 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.112225 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.688495 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8" exitCode=0 Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.688625 4872 scope.go:117] "RemoveContainer" containerID="62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.688699 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.690376 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.690870 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.691354 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.694325 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.694864 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.695170 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.707090 4872 scope.go:117] "RemoveContainer" containerID="5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.720360 4872 scope.go:117] "RemoveContainer" containerID="1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.734726 4872 scope.go:117] "RemoveContainer" containerID="6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.750689 4872 scope.go:117] "RemoveContainer" containerID="d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.765122 4872 scope.go:117] "RemoveContainer" containerID="4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.785701 4872 scope.go:117] "RemoveContainer" containerID="62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c" Jan 27 06:57:44 crc 
kubenswrapper[4872]: E0127 06:57:44.786151 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\": container with ID starting with 62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c not found: ID does not exist" containerID="62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.786189 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c"} err="failed to get container status \"62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\": rpc error: code = NotFound desc = could not find container \"62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c\": container with ID starting with 62853b04d9b9b98e4daea69d659dfea5725723728a650fe40853d3673fcb031c not found: ID does not exist" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.786213 4872 scope.go:117] "RemoveContainer" containerID="5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.786671 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\": container with ID starting with 5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805 not found: ID does not exist" containerID="5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.786689 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805"} err="failed to get container status \"5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\": rpc error: code = NotFound desc = could not find container \"5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805\": container with ID starting with 5801c35817dcbb5fb9c16a882fdd98b5741e53dbbe42a3c0744efe1343a66805 not found: ID does not exist" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.786706 4872 scope.go:117] "RemoveContainer" containerID="1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.787480 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\": container with ID starting with 1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb not found: ID does not exist" containerID="1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.787496 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb"} err="failed to get container status \"1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\": rpc error: code = NotFound desc = could not find container \"1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb\": container with ID starting with 1dd8db141b42c938904833dc136ad05a0ae9f0f0a4f7ddeaa06449f0ac4a66cb not found: ID does not exist" Jan 27 06:57:44 crc kubenswrapper[4872]: 
I0127 06:57:44.787512 4872 scope.go:117] "RemoveContainer" containerID="6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.787961 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\": container with ID starting with 6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9 not found: ID does not exist" containerID="6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.787979 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9"} err="failed to get container status \"6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\": rpc error: code = NotFound desc = could not find container \"6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9\": container with ID starting with 6f80ba3881b72417957ceb03ec559d00521d7a59e0dbaf99b604fa087faf07a9 not found: ID does not exist" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.787990 4872 scope.go:117] "RemoveContainer" containerID="d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.788262 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\": container with ID starting with d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8 not found: ID does not exist" containerID="d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.788291 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8"} err="failed to get container status \"d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\": rpc error: code = NotFound desc = could not find container \"d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8\": container with ID starting with d78adef2b7e3b0dfa0b6c35ffd9dc91921be694653a4a1725f1cfc813a483aa8 not found: ID does not exist" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.788314 4872 scope.go:117] "RemoveContainer" containerID="4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.788519 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\": container with ID starting with 4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01 not found: ID does not exist" containerID="4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01" Jan 27 06:57:44 crc kubenswrapper[4872]: I0127 06:57:44.788536 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01"} err="failed to get container status \"4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\": rpc error: code = NotFound desc = could not find container \"4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01\": container 
with ID starting with 4f8c70a3a018a4f8b007d25dee99bda70402e2328787ee31b0b581a43a4ccd01 not found: ID does not exist" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.881762 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:57:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:57:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:57:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:57:44Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.882274 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.882464 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.882609 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.882747 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.882767 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:57:44 crc kubenswrapper[4872]: E0127 06:57:44.913095 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Jan 27 06:57:46 crc kubenswrapper[4872]: E0127 06:57:46.514343 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Jan 27 06:57:49 crc kubenswrapper[4872]: I0127 06:57:49.552669 4872 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" containerID="cri-o://142789687c52d212a5ca92d2a05edd9ae4edc1cdcccf56fe1e2e2d72b5ad87a0" gracePeriod=15 Jan 27 06:57:49 crc kubenswrapper[4872]: I0127 06:57:49.716595 4872 generic.go:334] "Generic (PLEG): container finished" podID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerID="142789687c52d212a5ca92d2a05edd9ae4edc1cdcccf56fe1e2e2d72b5ad87a0" exitCode=0 Jan 27 06:57:49 crc kubenswrapper[4872]: I0127 06:57:49.716636 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" event={"ID":"a31dcc31-ff38-40f9-b26d-fb3757f651c5","Type":"ContainerDied","Data":"142789687c52d212a5ca92d2a05edd9ae4edc1cdcccf56fe1e2e2d72b5ad87a0"} Jan 27 06:57:49 crc kubenswrapper[4872]: E0127 06:57:49.717561 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="6.4s" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.029405 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.030290 4872 status_manager.go:851] "Failed to get status for pod" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vvsln\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.030792 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.031209 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121203 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-idp-0-file-data\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121271 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-service-ca\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121290 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-provider-selection\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121308 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-cliconfig\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121327 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-policies\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121368 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-error\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121407 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-login\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121435 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-router-certs\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121450 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-serving-cert\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121469 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-ocp-branding-template\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121499 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc7ts\" (UniqueName: \"kubernetes.io/projected/a31dcc31-ff38-40f9-b26d-fb3757f651c5-kube-api-access-tc7ts\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121529 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-session\") pod 
\"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121561 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-trusted-ca-bundle\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121574 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-dir\") pod \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\" (UID: \"a31dcc31-ff38-40f9-b26d-fb3757f651c5\") " Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.121816 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.122237 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.122250 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.122626 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.123100 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.127492 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31dcc31-ff38-40f9-b26d-fb3757f651c5-kube-api-access-tc7ts" (OuterVolumeSpecName: "kube-api-access-tc7ts") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). 
InnerVolumeSpecName "kube-api-access-tc7ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.127600 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.128158 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.130657 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.130916 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.131485 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.131703 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.136047 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.136687 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a31dcc31-ff38-40f9-b26d-fb3757f651c5" (UID: "a31dcc31-ff38-40f9-b26d-fb3757f651c5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222819 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222876 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222890 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222901 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222911 4872 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222919 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222928 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222937 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222946 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222957 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc 
kubenswrapper[4872]: I0127 06:57:50.222966 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc7ts\" (UniqueName: \"kubernetes.io/projected/a31dcc31-ff38-40f9-b26d-fb3757f651c5-kube-api-access-tc7ts\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222974 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.222982 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31dcc31-ff38-40f9-b26d-fb3757f651c5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.223009 4872 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a31dcc31-ff38-40f9-b26d-fb3757f651c5-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.722199 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" event={"ID":"a31dcc31-ff38-40f9-b26d-fb3757f651c5","Type":"ContainerDied","Data":"9caeef24e92a562a561c1d76c200ddd51875af403197ad8ea2ee32190600c0b7"} Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.722225 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.722265 4872 scope.go:117] "RemoveContainer" containerID="142789687c52d212a5ca92d2a05edd9ae4edc1cdcccf56fe1e2e2d72b5ad87a0" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.722748 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.722939 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.723811 4872 status_manager.go:851] "Failed to get status for pod" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vvsln\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.739270 4872 status_manager.go:851] "Failed to get status for pod" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vvsln\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.739718 4872 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: I0127 06:57:50.739954 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:50 crc kubenswrapper[4872]: E0127 06:57:50.841810 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8433d3dc1da9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:57:41.735800233 +0000 UTC m=+238.263275419,LastTimestamp:2026-01-27 06:57:41.735800233 +0000 UTC m=+238.263275419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.097304 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.098221 4872 status_manager.go:851] "Failed to get status for pod" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vvsln\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.098624 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.099082 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.110065 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.110098 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:57:52 crc kubenswrapper[4872]: E0127 06:57:52.110496 4872 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.110974 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.733948 4872 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6d022c4099f84cfa19ff2e90745368b2759a12769b92a505430bcf47e46d5ca5" exitCode=0 Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.734084 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6d022c4099f84cfa19ff2e90745368b2759a12769b92a505430bcf47e46d5ca5"} Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.734282 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"01d136b36265d0131e3e2eca1436149b7bc651d4fd6b15913e06c802c6c89487"} Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.734565 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.734582 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.735445 4872 status_manager.go:851] "Failed to get status for pod" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" pod="openshift-authentication/oauth-openshift-558db77b4-vvsln" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vvsln\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:52 crc kubenswrapper[4872]: E0127 06:57:52.735465 4872 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.735985 4872 status_manager.go:851] "Failed to get status for pod" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:52 crc kubenswrapper[4872]: I0127 06:57:52.736433 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Jan 27 06:57:53 crc kubenswrapper[4872]: I0127 06:57:53.741693 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37e1af232e5dfb692105b59562642e9e921dcbe3124e0a6f77508596d2df6b37"} Jan 27 06:57:53 crc kubenswrapper[4872]: I0127 06:57:53.741749 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86a986b04d88b2dfbbd5271e5140ca85fad6a0b84b3aeeb35219edf10fdbb6c2"} Jan 27 06:57:53 crc kubenswrapper[4872]: I0127 06:57:53.741763 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b79d1918a276b05398b7d3a26c3a9444b864159e4f8b83472317e68309ec959"} Jan 27 06:57:54 crc kubenswrapper[4872]: I0127 06:57:54.749098 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"000e3cca21c6cb6be1e44de9055c734d5847b2fcf649b807128b82415903a5ea"} Jan 27 06:57:54 crc kubenswrapper[4872]: I0127 06:57:54.749931 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:54 crc kubenswrapper[4872]: I0127 06:57:54.750008 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"508e4640b6526d9a36ebbec7d5b6b85a9f29e4f83dbc179ad07c2370ab6a17c6"} Jan 27 06:57:54 crc kubenswrapper[4872]: I0127 06:57:54.749371 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:57:54 crc kubenswrapper[4872]: I0127 06:57:54.750108 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:57:57 crc kubenswrapper[4872]: I0127 06:57:57.111481 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:57 crc kubenswrapper[4872]: I0127 06:57:57.111812 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:57 crc kubenswrapper[4872]: I0127 06:57:57.117162 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:59 crc kubenswrapper[4872]: I0127 06:57:59.760906 4872 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:57:59 crc kubenswrapper[4872]: I0127 06:57:59.777705 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:57:59 crc kubenswrapper[4872]: I0127 06:57:59.777757 4872 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51" exitCode=1 Jan 27 06:57:59 crc kubenswrapper[4872]: I0127 06:57:59.777796 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51"} Jan 27 06:57:59 crc kubenswrapper[4872]: I0127 06:57:59.778451 4872 scope.go:117] "RemoveContainer" containerID="6b1f18ab3bef75dde345964c48f8a80ae54fdd4f3f51f9cfa4c2fb899c90fc51" Jan 27 06:58:00 crc kubenswrapper[4872]: I0127 06:58:00.797956 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:58:00 crc kubenswrapper[4872]: I0127 06:58:00.799865 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"91e17a74ce5ed977f20cf9a9158e4c60c1ee3335e8b1bf18212b5aa6e9525758"} Jan 27 06:58:00 crc kubenswrapper[4872]: I0127 06:58:00.800225 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:58:00 crc kubenswrapper[4872]: I0127 06:58:00.800261 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:58:00 crc kubenswrapper[4872]: I0127 06:58:00.805035 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:58:00 crc kubenswrapper[4872]: I0127 06:58:00.821126 4872 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d51d896-66e4-4c03-8749-b7afbce81cd4" Jan 27 06:58:01 crc kubenswrapper[4872]: I0127 06:58:01.804923 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:58:01 crc kubenswrapper[4872]: I0127 06:58:01.804953 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7d3d5eed-f827-4822-86b4-b98a91ac772a" Jan 27 06:58:04 crc kubenswrapper[4872]: I0127 06:58:04.120285 4872 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d51d896-66e4-4c03-8749-b7afbce81cd4" Jan 27 06:58:04 crc kubenswrapper[4872]: I0127 06:58:04.919732 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:58:04 crc kubenswrapper[4872]: I0127 06:58:04.923300 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:58:05 crc kubenswrapper[4872]: I0127 06:58:05.821790 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:58:06 crc kubenswrapper[4872]: I0127 06:58:06.146706 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 06:58:06 crc kubenswrapper[4872]: I0127 06:58:06.163439 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 06:58:06 crc kubenswrapper[4872]: I0127 06:58:06.615397 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 06:58:06 crc kubenswrapper[4872]: I0127 06:58:06.632100 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 06:58:06 crc kubenswrapper[4872]: I0127 06:58:06.693062 4872 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 06:58:07 crc kubenswrapper[4872]: I0127 06:58:07.038825 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 06:58:07 crc kubenswrapper[4872]: I0127 06:58:07.275333 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 06:58:07 crc kubenswrapper[4872]: I0127 06:58:07.540936 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 06:58:07 crc kubenswrapper[4872]: I0127 06:58:07.552759 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 06:58:07 crc kubenswrapper[4872]: I0127 06:58:07.565299 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.035591 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.070644 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.265427 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.377119 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.557236 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.774904 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.812333 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.877186 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 06:58:08 crc kubenswrapper[4872]: I0127 06:58:08.994161 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.223680 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.432468 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.558474 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.559251 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.664134 4872 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.716646 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 06:58:09 crc kubenswrapper[4872]: I0127 06:58:09.921641 4872 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:58:10 crc kubenswrapper[4872]: I0127 06:58:10.032098 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 06:58:10 crc kubenswrapper[4872]: I0127 06:58:10.342665 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 06:58:10 crc kubenswrapper[4872]: I0127 06:58:10.473918 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 06:58:10 crc kubenswrapper[4872]: I0127 06:58:10.477867 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 06:58:10 crc kubenswrapper[4872]: I0127 06:58:10.860365 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 06:58:11 crc kubenswrapper[4872]: I0127 06:58:11.340595 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 06:58:11 crc kubenswrapper[4872]: I0127 06:58:11.468461 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 06:58:11 crc kubenswrapper[4872]: I0127 06:58:11.476969 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 06:58:11 crc kubenswrapper[4872]: I0127 06:58:11.672992 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 06:58:12 crc kubenswrapper[4872]: I0127 06:58:12.061227 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 06:58:12 crc kubenswrapper[4872]: I0127 06:58:12.747212 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 06:58:13 crc kubenswrapper[4872]: I0127 06:58:13.037519 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 06:58:13 crc kubenswrapper[4872]: I0127 06:58:13.387433 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 06:58:13 crc kubenswrapper[4872]: I0127 06:58:13.548233 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 06:58:13 crc kubenswrapper[4872]: I0127 06:58:13.586945 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 06:58:13 crc kubenswrapper[4872]: I0127 06:58:13.669878 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.023013 4872 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.058753 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.074575 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.193067 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.276095 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.338066 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.378872 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.601042 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.604207 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.623301 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 06:58:14 crc kubenswrapper[4872]: I0127 06:58:14.672369 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.173649 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.173760 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.292067 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.319360 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.687739 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.719106 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.799389 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.833405 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.859572 
4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 06:58:15 crc kubenswrapper[4872]: I0127 06:58:15.872367 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.211005 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.232279 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.325105 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.471790 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.508820 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.510104 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.549365 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.732565 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 06:58:16 crc kubenswrapper[4872]: I0127 06:58:16.942491 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.054597 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.124113 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.212796 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.218658 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.399302 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.709104 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.709464 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.737987 4872 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.757414 4872 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.770531 4872 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.792155 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.960129 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 06:58:17 crc kubenswrapper[4872]: I0127 06:58:17.966526 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.238095 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.286354 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.294346 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.412680 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.468233 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.510996 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.550999 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.667226 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.705892 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.728261 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.786789 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.859046 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.859056 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.900908 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 06:58:18 crc kubenswrapper[4872]: I0127 06:58:18.965982 4872 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.236635 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.250531 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.267264 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.277754 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.389300 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.407287 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.458281 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.556556 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.567473 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.812541 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.825912 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.911780 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.969371 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 06:58:19 crc kubenswrapper[4872]: I0127 06:58:19.982629 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.050167 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.055363 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.097983 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.124949 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 06:58:20 crc 
kubenswrapper[4872]: I0127 06:58:20.322874 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.439985 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.444586 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.464691 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.505908 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.563481 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.684411 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.737023 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.786284 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.812512 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.818597 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.895954 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.907915 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.933688 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.935377 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 06:58:20 crc kubenswrapper[4872]: I0127 06:58:20.955458 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.006242 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.112478 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.174955 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.260709 4872 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.291323 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.380144 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.466496 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.520574 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.553167 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.677151 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.683279 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.699464 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.710054 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.768371 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.809743 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.870012 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 06:58:21 crc kubenswrapper[4872]: I0127 06:58:21.938605 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.000818 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.095586 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.204296 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.218945 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.321084 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 
06:58:22.447748 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.479696 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.517985 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.539616 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.648150 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.662591 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.712165 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.717069 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.764959 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.773685 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.810425 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.946747 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 06:58:22 crc kubenswrapper[4872]: I0127 06:58:22.947557 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.011265 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.088696 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.115706 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.166923 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.169924 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.233881 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 
06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.275240 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.328112 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.365180 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.380953 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.386169 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.396402 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.396600 4872 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.423089 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.446474 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.525733 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.605213 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.727588 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.787585 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.810663 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.856794 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 06:58:23 crc kubenswrapper[4872]: I0127 06:58:23.985815 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.076247 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.092131 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.095196 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.103686 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.280531 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.367905 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.369721 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.382704 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.516722 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.568962 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.610949 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.612990 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.665999 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.721875 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.740969 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.785082 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.993430 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 06:58:24 crc kubenswrapper[4872]: I0127 06:58:24.996643 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.088969 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.302462 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.551942 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.580244 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.737211 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.848031 4872 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.857400 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 06:58:25 crc kubenswrapper[4872]: I0127 06:58:25.985138 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.136574 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.156434 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.214573 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.286441 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.468961 4872 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.474183 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.474161477 podStartE2EDuration="45.474161477s" podCreationTimestamp="2026-01-27 06:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:57:59.673924869 +0000 UTC m=+256.201400075" watchObservedRunningTime="2026-01-27 06:58:26.474161477 +0000 UTC m=+283.001636673" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.475508 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vvsln","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.475562 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.483209 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.501393 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.501348662 podStartE2EDuration="27.501348662s" podCreationTimestamp="2026-01-27 06:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:58:26.493942519 +0000 UTC m=+283.021417725" watchObservedRunningTime="2026-01-27 06:58:26.501348662 +0000 UTC m=+283.028823858" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.510123 
4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.546521 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.877343 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 06:58:26 crc kubenswrapper[4872]: I0127 06:58:26.882234 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 06:58:27 crc kubenswrapper[4872]: I0127 06:58:27.034706 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 06:58:27 crc kubenswrapper[4872]: I0127 06:58:27.102896 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 06:58:27 crc kubenswrapper[4872]: I0127 06:58:27.463239 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 06:58:28 crc kubenswrapper[4872]: I0127 06:58:28.105605 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" path="/var/lib/kubelet/pods/a31dcc31-ff38-40f9-b26d-fb3757f651c5/volumes" Jan 27 06:58:28 crc kubenswrapper[4872]: I0127 06:58:28.276983 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 06:58:28 crc kubenswrapper[4872]: I0127 06:58:28.310076 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:58:28 crc kubenswrapper[4872]: I0127 06:58:28.440349 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 06:58:28 crc kubenswrapper[4872]: I0127 06:58:28.572830 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 06:58:29 crc kubenswrapper[4872]: I0127 06:58:29.387828 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 06:58:29 crc kubenswrapper[4872]: I0127 06:58:29.503936 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 06:58:29 crc kubenswrapper[4872]: I0127 06:58:29.511593 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 06:58:30 crc kubenswrapper[4872]: I0127 06:58:30.095455 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 06:58:30 crc kubenswrapper[4872]: I0127 06:58:30.319981 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:58:33 crc kubenswrapper[4872]: I0127 06:58:33.603328 4872 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:58:33 crc kubenswrapper[4872]: I0127 06:58:33.603949 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" containerID="cri-o://e11c1291b1986cb2f5f9d947fb6a58aa9e0f775fb3adf910babf76fb1872421e" gracePeriod=5 Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.577788 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-fc667b7f-6ptzs"] Jan 27 06:58:34 crc kubenswrapper[4872]: E0127 06:58:34.587117 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.587339 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:58:34 crc kubenswrapper[4872]: E0127 06:58:34.587418 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" containerName="installer" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.587426 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" containerName="installer" Jan 27 06:58:34 crc kubenswrapper[4872]: E0127 06:58:34.587464 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.587472 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.588011 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31dcc31-ff38-40f9-b26d-fb3757f651c5" containerName="oauth-openshift" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.588035 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.588048 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e764f4a-dc17-468f-9f40-da1cd46c4098" containerName="installer" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.588781 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.598105 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.598291 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.598423 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.598742 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.598932 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.599160 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.600664 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.606916 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.608218 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.608287 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fc667b7f-6ptzs"] Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.608349 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.608534 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.608738 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.611470 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.625496 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.634270 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650235 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-audit-policies\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 
06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650296 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650337 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4wz\" (UniqueName: \"kubernetes.io/projected/02436a5f-6c88-4e16-beea-fa6f6057cf3e-kube-api-access-zv4wz\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650365 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02436a5f-6c88-4e16-beea-fa6f6057cf3e-audit-dir\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650623 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650736 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-error\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650769 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650798 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-login\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650832 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: 
\"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650897 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.650958 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.651011 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-session\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.651040 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.651088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752641 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-error\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752689 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752719 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-login\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752747 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752772 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752807 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752858 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-session\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752882 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752936 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-audit-policies\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752957 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.752982 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4wz\" (UniqueName: \"kubernetes.io/projected/02436a5f-6c88-4e16-beea-fa6f6057cf3e-kube-api-access-zv4wz\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.753007 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02436a5f-6c88-4e16-beea-fa6f6057cf3e-audit-dir\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.753052 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.753954 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.754162 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.754266 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/02436a5f-6c88-4e16-beea-fa6f6057cf3e-audit-dir\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.757232 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-audit-policies\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.757317 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " 
pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.759185 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-session\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.759247 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.759864 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.760788 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-login\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.768230 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.770452 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.772335 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.773002 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/02436a5f-6c88-4e16-beea-fa6f6057cf3e-v4-0-config-user-template-error\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 
06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.774837 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4wz\" (UniqueName: \"kubernetes.io/projected/02436a5f-6c88-4e16-beea-fa6f6057cf3e-kube-api-access-zv4wz\") pod \"oauth-openshift-fc667b7f-6ptzs\" (UID: \"02436a5f-6c88-4e16-beea-fa6f6057cf3e\") " pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:34 crc kubenswrapper[4872]: I0127 06:58:34.922926 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:35 crc kubenswrapper[4872]: I0127 06:58:35.317606 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fc667b7f-6ptzs"] Jan 27 06:58:35 crc kubenswrapper[4872]: I0127 06:58:35.979550 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" event={"ID":"02436a5f-6c88-4e16-beea-fa6f6057cf3e","Type":"ContainerStarted","Data":"1fd48b3848e51ee2f2dc19ff6cdaf917585e9c07cb256b85b6114966963f4a55"} Jan 27 06:58:35 crc kubenswrapper[4872]: I0127 06:58:35.979881 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" event={"ID":"02436a5f-6c88-4e16-beea-fa6f6057cf3e","Type":"ContainerStarted","Data":"86be726f09e6a6d820836f2150924fcc1e5a97f8290ed5aeb525821af0be4ba5"} Jan 27 06:58:35 crc kubenswrapper[4872]: I0127 06:58:35.981201 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:35 crc kubenswrapper[4872]: I0127 06:58:35.986504 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" Jan 27 06:58:36 crc kubenswrapper[4872]: I0127 06:58:36.003347 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-fc667b7f-6ptzs" podStartSLOduration=72.003333666 podStartE2EDuration="1m12.003333666s" podCreationTimestamp="2026-01-27 06:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:58:36.001670781 +0000 UTC m=+292.529145977" watchObservedRunningTime="2026-01-27 06:58:36.003333666 +0000 UTC m=+292.530808862" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.002438 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.002917 4872 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e11c1291b1986cb2f5f9d947fb6a58aa9e0f775fb3adf910babf76fb1872421e" exitCode=137 Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.201517 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.201600 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205705 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205757 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205784 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205819 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205871 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205922 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.205932 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.206021 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.206040 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.206261 4872 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.206278 4872 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.206290 4872 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.206300 4872 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.218670 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:39 crc kubenswrapper[4872]: I0127 06:58:39.306940 4872 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.009348 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.009414 4872 scope.go:117] "RemoveContainer" containerID="e11c1291b1986cb2f5f9d947fb6a58aa9e0f775fb3adf910babf76fb1872421e" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.009475 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.104958 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.105401 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.119238 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.119270 4872 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5ca96dcd-eee1-49f0-b721-19470809e169" Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.122760 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:58:40 crc kubenswrapper[4872]: I0127 06:58:40.122779 4872 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5ca96dcd-eee1-49f0-b721-19470809e169" Jan 27 06:58:43 crc kubenswrapper[4872]: I0127 06:58:43.900153 4872 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.717590 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cp9cf"] Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.718721 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.740915 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cp9cf"] Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891666 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891724 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-registry-tls\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891746 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4aa45b37-a884-478e-a401-ec48d8f3e80c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891767 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4aa45b37-a884-478e-a401-ec48d8f3e80c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891787 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aa45b37-a884-478e-a401-ec48d8f3e80c-trusted-ca\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891892 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-bound-sa-token\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891935 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4aa45b37-a884-478e-a401-ec48d8f3e80c-registry-certificates\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.891969 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4r2c\" (UniqueName: 
\"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-kube-api-access-g4r2c\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.915011 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993359 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-bound-sa-token\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993440 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4aa45b37-a884-478e-a401-ec48d8f3e80c-registry-certificates\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993484 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4r2c\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-kube-api-access-g4r2c\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993563 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-registry-tls\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993594 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4aa45b37-a884-478e-a401-ec48d8f3e80c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993630 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4aa45b37-a884-478e-a401-ec48d8f3e80c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.993664 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aa45b37-a884-478e-a401-ec48d8f3e80c-trusted-ca\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.994541 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4aa45b37-a884-478e-a401-ec48d8f3e80c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.995239 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4aa45b37-a884-478e-a401-ec48d8f3e80c-registry-certificates\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:09 crc kubenswrapper[4872]: I0127 06:59:09.995355 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aa45b37-a884-478e-a401-ec48d8f3e80c-trusted-ca\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:10 crc kubenswrapper[4872]: I0127 06:59:10.005674 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4aa45b37-a884-478e-a401-ec48d8f3e80c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:10 crc kubenswrapper[4872]: I0127 06:59:10.006225 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-registry-tls\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:10 crc kubenswrapper[4872]: I0127 06:59:10.014810 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4r2c\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-kube-api-access-g4r2c\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:10 crc kubenswrapper[4872]: I0127 06:59:10.016611 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aa45b37-a884-478e-a401-ec48d8f3e80c-bound-sa-token\") pod \"image-registry-66df7c8f76-cp9cf\" (UID: \"4aa45b37-a884-478e-a401-ec48d8f3e80c\") " pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:10 crc kubenswrapper[4872]: I0127 06:59:10.034802 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:10 crc kubenswrapper[4872]: I0127 06:59:10.477613 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cp9cf"] Jan 27 06:59:11 crc kubenswrapper[4872]: I0127 06:59:11.339438 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" event={"ID":"4aa45b37-a884-478e-a401-ec48d8f3e80c","Type":"ContainerStarted","Data":"bacb43eadf187f852c0e6a5d65cf50d6b772e35295bb6e93437cb09e17ab7aaf"} Jan 27 06:59:11 crc kubenswrapper[4872]: I0127 06:59:11.339787 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:11 crc kubenswrapper[4872]: I0127 06:59:11.339799 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" event={"ID":"4aa45b37-a884-478e-a401-ec48d8f3e80c","Type":"ContainerStarted","Data":"17d2ec402e63ec2ba952a5cc8efb80191743520ad21d5708b7ca8f228c2da2cf"} Jan 27 06:59:11 crc kubenswrapper[4872]: I0127 06:59:11.366496 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" podStartSLOduration=2.366473976 podStartE2EDuration="2.366473976s" podCreationTimestamp="2026-01-27 06:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:59:11.363467253 +0000 UTC m=+327.890942459" watchObservedRunningTime="2026-01-27 06:59:11.366473976 +0000 UTC m=+327.893949182" Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.842080 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.842907 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hthrz" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="registry-server" containerID="cri-o://7ce2b73c8ea5c00d8dde397f7f6242a6049fe22f7d4aec243b70e06141708030" gracePeriod=30 Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.852303 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k9dq"] Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.852571 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9k9dq" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="registry-server" containerID="cri-o://36bb9be558e24f9ebbee1fc1799760e10926d52aa68f6d504423e4dfd5a3a628" gracePeriod=30 Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.865310 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lxtl"] Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.865523 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" containerID="cri-o://776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866" gracePeriod=30 Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.877601 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pp8h2"] Jan 
27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.878011 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pp8h2" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="registry-server" containerID="cri-o://4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4" gracePeriod=30 Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.888567 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8r2pn"] Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.888813 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8r2pn" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="registry-server" containerID="cri-o://190507941f87e8fd66043293dcc43f217259a962b1eb685cc0e55717cb808d05" gracePeriod=30 Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.911642 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xc2lz"] Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.912808 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:15 crc kubenswrapper[4872]: I0127 06:59:15.926517 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xc2lz"] Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.072883 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvh4n\" (UniqueName: \"kubernetes.io/projected/48d4741c-2022-489f-b068-dfa9ab32498a-kube-api-access-zvh4n\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.073205 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48d4741c-2022-489f-b068-dfa9ab32498a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.073256 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48d4741c-2022-489f-b068-dfa9ab32498a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.174699 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48d4741c-2022-489f-b068-dfa9ab32498a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.174778 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48d4741c-2022-489f-b068-dfa9ab32498a-marketplace-trusted-ca\") 
pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.174831 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvh4n\" (UniqueName: \"kubernetes.io/projected/48d4741c-2022-489f-b068-dfa9ab32498a-kube-api-access-zvh4n\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.177467 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48d4741c-2022-489f-b068-dfa9ab32498a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.199785 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48d4741c-2022-489f-b068-dfa9ab32498a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.219424 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvh4n\" (UniqueName: \"kubernetes.io/projected/48d4741c-2022-489f-b068-dfa9ab32498a-kube-api-access-zvh4n\") pod \"marketplace-operator-79b997595-xc2lz\" (UID: \"48d4741c-2022-489f-b068-dfa9ab32498a\") " pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.248725 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.311835 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.382391 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f173d12-704f-4431-aa56-05841c81d146-marketplace-trusted-ca\") pod \"3f173d12-704f-4431-aa56-05841c81d146\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.382437 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f173d12-704f-4431-aa56-05841c81d146-marketplace-operator-metrics\") pod \"3f173d12-704f-4431-aa56-05841c81d146\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.382475 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9chd\" (UniqueName: \"kubernetes.io/projected/3f173d12-704f-4431-aa56-05841c81d146-kube-api-access-s9chd\") pod \"3f173d12-704f-4431-aa56-05841c81d146\" (UID: \"3f173d12-704f-4431-aa56-05841c81d146\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.383533 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f173d12-704f-4431-aa56-05841c81d146-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3f173d12-704f-4431-aa56-05841c81d146" (UID: "3f173d12-704f-4431-aa56-05841c81d146"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.384521 4872 generic.go:334] "Generic (PLEG): container finished" podID="22382085-00d5-42cd-97fb-098b131498d6" containerID="36bb9be558e24f9ebbee1fc1799760e10926d52aa68f6d504423e4dfd5a3a628" exitCode=0 Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.384619 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k9dq" event={"ID":"22382085-00d5-42cd-97fb-098b131498d6","Type":"ContainerDied","Data":"36bb9be558e24f9ebbee1fc1799760e10926d52aa68f6d504423e4dfd5a3a628"} Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.387417 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f173d12-704f-4431-aa56-05841c81d146-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3f173d12-704f-4431-aa56-05841c81d146" (UID: "3f173d12-704f-4431-aa56-05841c81d146"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.388092 4872 generic.go:334] "Generic (PLEG): container finished" podID="3f173d12-704f-4431-aa56-05841c81d146" containerID="776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866" exitCode=0 Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.388159 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" event={"ID":"3f173d12-704f-4431-aa56-05841c81d146","Type":"ContainerDied","Data":"776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866"} Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.388186 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" event={"ID":"3f173d12-704f-4431-aa56-05841c81d146","Type":"ContainerDied","Data":"3e427632378313e81ede5b79f57d46cf78e24920a0878ae58f1c486c7a508a64"} Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.388205 4872 scope.go:117] "RemoveContainer" containerID="776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.388333 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.394066 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f173d12-704f-4431-aa56-05841c81d146-kube-api-access-s9chd" (OuterVolumeSpecName: "kube-api-access-s9chd") pod "3f173d12-704f-4431-aa56-05841c81d146" (UID: "3f173d12-704f-4431-aa56-05841c81d146"). InnerVolumeSpecName "kube-api-access-s9chd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.409161 4872 generic.go:334] "Generic (PLEG): container finished" podID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerID="7ce2b73c8ea5c00d8dde397f7f6242a6049fe22f7d4aec243b70e06141708030" exitCode=0 Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.409223 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerDied","Data":"7ce2b73c8ea5c00d8dde397f7f6242a6049fe22f7d4aec243b70e06141708030"} Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.428203 4872 scope.go:117] "RemoveContainer" containerID="776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866" Jan 27 06:59:16 crc kubenswrapper[4872]: E0127 06:59:16.428997 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866\": container with ID starting with 776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866 not found: ID does not exist" containerID="776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.429039 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866"} err="failed to get container status \"776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866\": rpc error: code = NotFound desc = could not find container \"776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866\": container with ID starting with 
776c5e3afacd27f953b84fb3cf133be94e7ac60aa42857d5f223c69411c79866 not found: ID does not exist" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.431935 4872 generic.go:334] "Generic (PLEG): container finished" podID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerID="4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4" exitCode=0 Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.432023 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pp8h2" event={"ID":"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b","Type":"ContainerDied","Data":"4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4"} Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.434399 4872 generic.go:334] "Generic (PLEG): container finished" podID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerID="190507941f87e8fd66043293dcc43f217259a962b1eb685cc0e55717cb808d05" exitCode=0 Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.434431 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerDied","Data":"190507941f87e8fd66043293dcc43f217259a962b1eb685cc0e55717cb808d05"} Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.484076 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f173d12-704f-4431-aa56-05841c81d146-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.484376 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f173d12-704f-4431-aa56-05841c81d146-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.484389 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9chd\" (UniqueName: \"kubernetes.io/projected/3f173d12-704f-4431-aa56-05841c81d146-kube-api-access-s9chd\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: E0127 06:59:16.586508 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4 is running failed: container process not found" containerID="4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 06:59:16 crc kubenswrapper[4872]: E0127 06:59:16.586924 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4 is running failed: container process not found" containerID="4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 06:59:16 crc kubenswrapper[4872]: E0127 06:59:16.588098 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4 is running failed: container process not found" containerID="4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 06:59:16 crc kubenswrapper[4872]: E0127 06:59:16.588168 4872 prober.go:104] "Probe errored" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-pp8h2" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="registry-server" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.633602 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.651436 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.667378 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688561 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcc5t\" (UniqueName: \"kubernetes.io/projected/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-kube-api-access-vcc5t\") pod \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688616 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-utilities\") pod \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688639 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-catalog-content\") pod \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688668 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-utilities\") pod \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688756 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmpt9\" (UniqueName: \"kubernetes.io/projected/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-kube-api-access-tmpt9\") pod \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688793 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-catalog-content\") pod \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\" (UID: \"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688826 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-catalog-content\") pod \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688925 4872 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ggk9r\" (UniqueName: \"kubernetes.io/projected/a605f814-0bb0-4f00-88c9-57a7c47f7ece-kube-api-access-ggk9r\") pod \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\" (UID: \"a605f814-0bb0-4f00-88c9-57a7c47f7ece\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.688963 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-utilities\") pod \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\" (UID: \"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9\") " Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.689825 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-utilities" (OuterVolumeSpecName: "utilities") pod "a605f814-0bb0-4f00-88c9-57a7c47f7ece" (UID: "a605f814-0bb0-4f00-88c9-57a7c47f7ece"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.692712 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-utilities" (OuterVolumeSpecName: "utilities") pod "59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" (UID: "59197c9d-c3b2-4ca8-875b-f1264d3b8d7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.694165 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-utilities" (OuterVolumeSpecName: "utilities") pod "d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" (UID: "d6feabd3-81a6-4ef5-b7a7-5df917fefcc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.706795 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-kube-api-access-tmpt9" (OuterVolumeSpecName: "kube-api-access-tmpt9") pod "d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" (UID: "d6feabd3-81a6-4ef5-b7a7-5df917fefcc9"). InnerVolumeSpecName "kube-api-access-tmpt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.713745 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a605f814-0bb0-4f00-88c9-57a7c47f7ece-kube-api-access-ggk9r" (OuterVolumeSpecName: "kube-api-access-ggk9r") pod "a605f814-0bb0-4f00-88c9-57a7c47f7ece" (UID: "a605f814-0bb0-4f00-88c9-57a7c47f7ece"). InnerVolumeSpecName "kube-api-access-ggk9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.729827 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-kube-api-access-vcc5t" (OuterVolumeSpecName: "kube-api-access-vcc5t") pod "59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" (UID: "59197c9d-c3b2-4ca8-875b-f1264d3b8d7b"). InnerVolumeSpecName "kube-api-access-vcc5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.735901 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" (UID: "59197c9d-c3b2-4ca8-875b-f1264d3b8d7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.775261 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a605f814-0bb0-4f00-88c9-57a7c47f7ece" (UID: "a605f814-0bb0-4f00-88c9-57a7c47f7ece"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.775923 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lxtl"] Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.778681 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4lxtl"] Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790431 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmpt9\" (UniqueName: \"kubernetes.io/projected/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-kube-api-access-tmpt9\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790486 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790495 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790505 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggk9r\" (UniqueName: \"kubernetes.io/projected/a605f814-0bb0-4f00-88c9-57a7c47f7ece-kube-api-access-ggk9r\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790514 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790523 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcc5t\" (UniqueName: \"kubernetes.io/projected/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-kube-api-access-vcc5t\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790531 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a605f814-0bb0-4f00-88c9-57a7c47f7ece-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.790540 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.862257 4872 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" (UID: "d6feabd3-81a6-4ef5-b7a7-5df917fefcc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.886623 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xc2lz"] Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.891497 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:16 crc kubenswrapper[4872]: I0127 06:59:16.953629 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.094298 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8pkc\" (UniqueName: \"kubernetes.io/projected/22382085-00d5-42cd-97fb-098b131498d6-kube-api-access-n8pkc\") pod \"22382085-00d5-42cd-97fb-098b131498d6\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.094664 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-utilities\") pod \"22382085-00d5-42cd-97fb-098b131498d6\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.095531 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-utilities" (OuterVolumeSpecName: "utilities") pod "22382085-00d5-42cd-97fb-098b131498d6" (UID: "22382085-00d5-42cd-97fb-098b131498d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.095608 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-catalog-content\") pod \"22382085-00d5-42cd-97fb-098b131498d6\" (UID: \"22382085-00d5-42cd-97fb-098b131498d6\") " Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.103001 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22382085-00d5-42cd-97fb-098b131498d6-kube-api-access-n8pkc" (OuterVolumeSpecName: "kube-api-access-n8pkc") pod "22382085-00d5-42cd-97fb-098b131498d6" (UID: "22382085-00d5-42cd-97fb-098b131498d6"). InnerVolumeSpecName "kube-api-access-n8pkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.108273 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pkc\" (UniqueName: \"kubernetes.io/projected/22382085-00d5-42cd-97fb-098b131498d6-kube-api-access-n8pkc\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.108299 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.167966 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22382085-00d5-42cd-97fb-098b131498d6" (UID: "22382085-00d5-42cd-97fb-098b131498d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.180367 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4lxtl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.180420 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4lxtl" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.209310 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22382085-00d5-42cd-97fb-098b131498d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.440717 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k9dq" event={"ID":"22382085-00d5-42cd-97fb-098b131498d6","Type":"ContainerDied","Data":"061cd47d145e63a89b484d6b8545726e6cf496ca2c7ed6c65c0aa24fabbf2ba9"} Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.440768 4872 scope.go:117] "RemoveContainer" containerID="36bb9be558e24f9ebbee1fc1799760e10926d52aa68f6d504423e4dfd5a3a628" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.440741 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k9dq" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.442374 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" event={"ID":"48d4741c-2022-489f-b068-dfa9ab32498a","Type":"ContainerStarted","Data":"cfe329f3039e20cd7d47494910554bcaa23be63fa68e14068d0dcb130d05b984"} Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.442398 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" event={"ID":"48d4741c-2022-489f-b068-dfa9ab32498a","Type":"ContainerStarted","Data":"37e613423ae23ae5a9108e0d8e37cbef0f6536806135a76f455be681c646aef5"} Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.442944 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.447332 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthrz" event={"ID":"a605f814-0bb0-4f00-88c9-57a7c47f7ece","Type":"ContainerDied","Data":"26ebcb09c7bcf30d6dfa858c61e2b9a064bc262c29e6d7b4ec95119b6fc44d09"} Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.447410 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthrz" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.452105 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pp8h2" event={"ID":"59197c9d-c3b2-4ca8-875b-f1264d3b8d7b","Type":"ContainerDied","Data":"1a88007679c4ab55d2b11af03c17ffb9a918bf4c7c43b716a2049b1159321378"} Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.452118 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pp8h2" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.455681 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8r2pn" event={"ID":"d6feabd3-81a6-4ef5-b7a7-5df917fefcc9","Type":"ContainerDied","Data":"e740d8c38db9569764f44b641336932c9cbb17c42c392a7d737c52cb8d70fb08"} Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.455871 4872 scope.go:117] "RemoveContainer" containerID="b6153cf7751dea6874b3b5934378533565c3adef32a86d2c33955cfca9c91552" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.456141 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8r2pn" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.466820 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.474530 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xc2lz" podStartSLOduration=2.474513914 podStartE2EDuration="2.474513914s" podCreationTimestamp="2026-01-27 06:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:59:17.470566995 +0000 UTC m=+333.998042191" watchObservedRunningTime="2026-01-27 06:59:17.474513914 +0000 UTC m=+334.001989110" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.497504 4872 scope.go:117] "RemoveContainer" containerID="6cd5a4e3c9a141c32577105f0398b99c7cc4095819e87dd7acf4d2bd5f7882ea" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.518591 4872 scope.go:117] "RemoveContainer" containerID="7ce2b73c8ea5c00d8dde397f7f6242a6049fe22f7d4aec243b70e06141708030" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.553196 4872 scope.go:117] "RemoveContainer" containerID="768ee19f12b3d4763ecf9a8f805ddd30c386c5b3c453872265c55bac6b40150f" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.555946 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k9dq"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.559464 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9k9dq"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.578911 4872 scope.go:117] "RemoveContainer" containerID="a28d382974c113924e6991f5ef06bd8982d7c525f1cfccb4b42e3776c1766c6c" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.579828 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.584944 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hthrz"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.594810 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pp8h2"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.598597 4872 scope.go:117] "RemoveContainer" containerID="4413e007d6b0c1cc1b0a644518d1bcacbd300d1cb9bce5cd54a4f908e33dd1a4" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.600095 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pp8h2"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.604019 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8r2pn"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.606944 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8r2pn"] Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.614930 4872 scope.go:117] "RemoveContainer" containerID="ff3c69af608b565804e0a6ede206405bd1749b1b261ee2a854032a93ce9cc43d" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.627298 4872 scope.go:117] "RemoveContainer" containerID="4aa2f49072a3eaa9d258f84ab48a0273b5d8b99e5f819a9a55569b06874810ff" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 
06:59:17.639375 4872 scope.go:117] "RemoveContainer" containerID="190507941f87e8fd66043293dcc43f217259a962b1eb685cc0e55717cb808d05" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.665027 4872 scope.go:117] "RemoveContainer" containerID="78c5ed6b5039c9a8fc737551ec70d70b01f8d2dc98a6c13e53fd079644a7731c" Jan 27 06:59:17 crc kubenswrapper[4872]: I0127 06:59:17.691397 4872 scope.go:117] "RemoveContainer" containerID="e7d5a61efb776a101cb7fd0503189524fdf2582fb9dbba30afe5cfaabe7c4e93" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.104395 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22382085-00d5-42cd-97fb-098b131498d6" path="/var/lib/kubelet/pods/22382085-00d5-42cd-97fb-098b131498d6/volumes" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.105589 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f173d12-704f-4431-aa56-05841c81d146" path="/var/lib/kubelet/pods/3f173d12-704f-4431-aa56-05841c81d146/volumes" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.106045 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" path="/var/lib/kubelet/pods/59197c9d-c3b2-4ca8-875b-f1264d3b8d7b/volumes" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.107077 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" path="/var/lib/kubelet/pods/a605f814-0bb0-4f00-88c9-57a7c47f7ece/volumes" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.107798 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" path="/var/lib/kubelet/pods/d6feabd3-81a6-4ef5-b7a7-5df917fefcc9/volumes" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288128 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5lb59"] Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288336 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288348 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288359 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288365 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288374 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288382 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288391 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288396 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" Jan 27 06:59:18 crc 
kubenswrapper[4872]: E0127 06:59:18.288404 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288409 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288419 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288425 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288437 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288443 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288484 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288490 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288497 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288502 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288510 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288517 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288525 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288531 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="extract-content" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288540 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288545 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: E0127 06:59:18.288553 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288559 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="22382085-00d5-42cd-97fb-098b131498d6" 
containerName="extract-utilities" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288645 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6feabd3-81a6-4ef5-b7a7-5df917fefcc9" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288655 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="22382085-00d5-42cd-97fb-098b131498d6" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288665 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f173d12-704f-4431-aa56-05841c81d146" containerName="marketplace-operator" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288674 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a605f814-0bb0-4f00-88c9-57a7c47f7ece" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.288681 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="59197c9d-c3b2-4ca8-875b-f1264d3b8d7b" containerName="registry-server" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.289407 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.292581 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.295614 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lb59"] Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.326831 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-catalog-content\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.326956 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-utilities\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.326980 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tmn7\" (UniqueName: \"kubernetes.io/projected/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-kube-api-access-7tmn7\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.428341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-utilities\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.428407 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tmn7\" (UniqueName: \"kubernetes.io/projected/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-kube-api-access-7tmn7\") pod \"redhat-marketplace-5lb59\" (UID: 
\"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.428491 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-catalog-content\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.429042 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-utilities\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.429203 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-catalog-content\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.445349 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tmn7\" (UniqueName: \"kubernetes.io/projected/dc57058f-3a9e-42fb-ae07-53e246ed8fc2-kube-api-access-7tmn7\") pod \"redhat-marketplace-5lb59\" (UID: \"dc57058f-3a9e-42fb-ae07-53e246ed8fc2\") " pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.488899 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sq6g2"] Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.490287 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.494009 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.501784 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sq6g2"] Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.529505 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059bb67f-38aa-492a-8d62-cfc3d3efc41d-catalog-content\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.529589 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbnc\" (UniqueName: \"kubernetes.io/projected/059bb67f-38aa-492a-8d62-cfc3d3efc41d-kube-api-access-6dbnc\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.529644 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059bb67f-38aa-492a-8d62-cfc3d3efc41d-utilities\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.611887 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.632490 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059bb67f-38aa-492a-8d62-cfc3d3efc41d-utilities\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.632555 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059bb67f-38aa-492a-8d62-cfc3d3efc41d-catalog-content\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.632608 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbnc\" (UniqueName: \"kubernetes.io/projected/059bb67f-38aa-492a-8d62-cfc3d3efc41d-kube-api-access-6dbnc\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.633323 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059bb67f-38aa-492a-8d62-cfc3d3efc41d-catalog-content\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.633782 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059bb67f-38aa-492a-8d62-cfc3d3efc41d-utilities\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.651931 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbnc\" (UniqueName: \"kubernetes.io/projected/059bb67f-38aa-492a-8d62-cfc3d3efc41d-kube-api-access-6dbnc\") pod \"redhat-operators-sq6g2\" (UID: \"059bb67f-38aa-492a-8d62-cfc3d3efc41d\") " pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:18 crc kubenswrapper[4872]: I0127 06:59:18.810576 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:19 crc kubenswrapper[4872]: W0127 06:59:19.027949 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc57058f_3a9e_42fb_ae07_53e246ed8fc2.slice/crio-2bba23160e9b508f1f8152cbdd541a73b75257a4153959cb3c0dedb37f69eb17 WatchSource:0}: Error finding container 2bba23160e9b508f1f8152cbdd541a73b75257a4153959cb3c0dedb37f69eb17: Status 404 returned error can't find the container with id 2bba23160e9b508f1f8152cbdd541a73b75257a4153959cb3c0dedb37f69eb17 Jan 27 06:59:19 crc kubenswrapper[4872]: I0127 06:59:19.030884 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lb59"] Jan 27 06:59:19 crc kubenswrapper[4872]: I0127 06:59:19.233221 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sq6g2"] Jan 27 06:59:19 crc kubenswrapper[4872]: I0127 06:59:19.491275 4872 generic.go:334] "Generic (PLEG): container finished" podID="dc57058f-3a9e-42fb-ae07-53e246ed8fc2" containerID="e35c81db983bf137cd3eb771c670c695a2bbbe9d14af71c8e7b021c2c06299a7" exitCode=0 Jan 27 06:59:19 crc kubenswrapper[4872]: I0127 06:59:19.491364 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lb59" event={"ID":"dc57058f-3a9e-42fb-ae07-53e246ed8fc2","Type":"ContainerDied","Data":"e35c81db983bf137cd3eb771c670c695a2bbbe9d14af71c8e7b021c2c06299a7"} Jan 27 06:59:19 crc kubenswrapper[4872]: I0127 06:59:19.491423 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lb59" event={"ID":"dc57058f-3a9e-42fb-ae07-53e246ed8fc2","Type":"ContainerStarted","Data":"2bba23160e9b508f1f8152cbdd541a73b75257a4153959cb3c0dedb37f69eb17"} Jan 27 06:59:19 crc kubenswrapper[4872]: I0127 06:59:19.493443 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sq6g2" event={"ID":"059bb67f-38aa-492a-8d62-cfc3d3efc41d","Type":"ContainerStarted","Data":"b0f2d78788544109157881df8e00aaee0fa09731b1da63ca148ac524b0240569"} Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.498783 4872 generic.go:334] "Generic (PLEG): container finished" podID="059bb67f-38aa-492a-8d62-cfc3d3efc41d" containerID="270ec2070a4bf4dd3a2aa2caf6ba883ea38a8a7bfc4f988a5b89d784c038a8c6" exitCode=0 Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.498885 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sq6g2" event={"ID":"059bb67f-38aa-492a-8d62-cfc3d3efc41d","Type":"ContainerDied","Data":"270ec2070a4bf4dd3a2aa2caf6ba883ea38a8a7bfc4f988a5b89d784c038a8c6"} Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.684876 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zzwgm"] Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.686394 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.688455 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.697108 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzwgm"] Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.756954 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkxj9\" (UniqueName: \"kubernetes.io/projected/788f4442-73e7-486a-b79c-83560f1c7cc3-kube-api-access-hkxj9\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.757025 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788f4442-73e7-486a-b79c-83560f1c7cc3-utilities\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.757167 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788f4442-73e7-486a-b79c-83560f1c7cc3-catalog-content\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.858716 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788f4442-73e7-486a-b79c-83560f1c7cc3-utilities\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.858787 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788f4442-73e7-486a-b79c-83560f1c7cc3-catalog-content\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.858876 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkxj9\" (UniqueName: \"kubernetes.io/projected/788f4442-73e7-486a-b79c-83560f1c7cc3-kube-api-access-hkxj9\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.859421 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788f4442-73e7-486a-b79c-83560f1c7cc3-utilities\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.859434 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788f4442-73e7-486a-b79c-83560f1c7cc3-catalog-content\") pod \"certified-operators-zzwgm\" (UID: 
\"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.890176 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dx8st"] Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.891395 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.892467 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkxj9\" (UniqueName: \"kubernetes.io/projected/788f4442-73e7-486a-b79c-83560f1c7cc3-kube-api-access-hkxj9\") pod \"certified-operators-zzwgm\" (UID: \"788f4442-73e7-486a-b79c-83560f1c7cc3\") " pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.893787 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.907257 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dx8st"] Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.959783 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81982f4d-c34f-4617-ae87-9021ffbad391-utilities\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.959866 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81982f4d-c34f-4617-ae87-9021ffbad391-catalog-content\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:20 crc kubenswrapper[4872]: I0127 06:59:20.960028 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf695\" (UniqueName: \"kubernetes.io/projected/81982f4d-c34f-4617-ae87-9021ffbad391-kube-api-access-mf695\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.004617 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.060807 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf695\" (UniqueName: \"kubernetes.io/projected/81982f4d-c34f-4617-ae87-9021ffbad391-kube-api-access-mf695\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.060882 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81982f4d-c34f-4617-ae87-9021ffbad391-utilities\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.060911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81982f4d-c34f-4617-ae87-9021ffbad391-catalog-content\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.061284 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81982f4d-c34f-4617-ae87-9021ffbad391-catalog-content\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.061686 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81982f4d-c34f-4617-ae87-9021ffbad391-utilities\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.096192 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf695\" (UniqueName: \"kubernetes.io/projected/81982f4d-c34f-4617-ae87-9021ffbad391-kube-api-access-mf695\") pod \"community-operators-dx8st\" (UID: \"81982f4d-c34f-4617-ae87-9021ffbad391\") " pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.223000 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.419610 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzwgm"] Jan 27 06:59:21 crc kubenswrapper[4872]: W0127 06:59:21.423996 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod788f4442_73e7_486a_b79c_83560f1c7cc3.slice/crio-c704818232b7deabb963e3b4d1b58454ae4b001a4e960d145873667ea7466407 WatchSource:0}: Error finding container c704818232b7deabb963e3b4d1b58454ae4b001a4e960d145873667ea7466407: Status 404 returned error can't find the container with id c704818232b7deabb963e3b4d1b58454ae4b001a4e960d145873667ea7466407 Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.505068 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwgm" event={"ID":"788f4442-73e7-486a-b79c-83560f1c7cc3","Type":"ContainerStarted","Data":"c704818232b7deabb963e3b4d1b58454ae4b001a4e960d145873667ea7466407"} Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.507124 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sq6g2" event={"ID":"059bb67f-38aa-492a-8d62-cfc3d3efc41d","Type":"ContainerStarted","Data":"a4f652b8818805eb1056a34528a6b117e1354bee6f985efc00a8e6ebe9e3229d"} Jan 27 06:59:21 crc kubenswrapper[4872]: I0127 06:59:21.610365 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dx8st"] Jan 27 06:59:21 crc kubenswrapper[4872]: W0127 06:59:21.616328 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81982f4d_c34f_4617_ae87_9021ffbad391.slice/crio-6915a80cab62a4bf91e87555c3559cd3defc850b52a7e52a2afbfb5f18d2d87a WatchSource:0}: Error finding container 6915a80cab62a4bf91e87555c3559cd3defc850b52a7e52a2afbfb5f18d2d87a: Status 404 returned error can't find the container with id 6915a80cab62a4bf91e87555c3559cd3defc850b52a7e52a2afbfb5f18d2d87a Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.513944 4872 generic.go:334] "Generic (PLEG): container finished" podID="059bb67f-38aa-492a-8d62-cfc3d3efc41d" containerID="a4f652b8818805eb1056a34528a6b117e1354bee6f985efc00a8e6ebe9e3229d" exitCode=0 Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.514010 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sq6g2" event={"ID":"059bb67f-38aa-492a-8d62-cfc3d3efc41d","Type":"ContainerDied","Data":"a4f652b8818805eb1056a34528a6b117e1354bee6f985efc00a8e6ebe9e3229d"} Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.517978 4872 generic.go:334] "Generic (PLEG): container finished" podID="81982f4d-c34f-4617-ae87-9021ffbad391" containerID="7009c734ff5868a7b95c495927adffd686cbb577d5054fdc7f57745825574e85" exitCode=0 Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.518066 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx8st" event={"ID":"81982f4d-c34f-4617-ae87-9021ffbad391","Type":"ContainerDied","Data":"7009c734ff5868a7b95c495927adffd686cbb577d5054fdc7f57745825574e85"} Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.518122 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx8st" 
event={"ID":"81982f4d-c34f-4617-ae87-9021ffbad391","Type":"ContainerStarted","Data":"6915a80cab62a4bf91e87555c3559cd3defc850b52a7e52a2afbfb5f18d2d87a"} Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.523394 4872 generic.go:334] "Generic (PLEG): container finished" podID="dc57058f-3a9e-42fb-ae07-53e246ed8fc2" containerID="c20162d7df8a147038beb0cdfbf2cd05a8bc18b235547c0ac72fefac4b6aaaa4" exitCode=0 Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.523462 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lb59" event={"ID":"dc57058f-3a9e-42fb-ae07-53e246ed8fc2","Type":"ContainerDied","Data":"c20162d7df8a147038beb0cdfbf2cd05a8bc18b235547c0ac72fefac4b6aaaa4"} Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.525511 4872 generic.go:334] "Generic (PLEG): container finished" podID="788f4442-73e7-486a-b79c-83560f1c7cc3" containerID="79cfdea3a8ebdd8e18169bcc043f0370956f0a098fde85c539d74f460ad09c89" exitCode=0 Jan 27 06:59:22 crc kubenswrapper[4872]: I0127 06:59:22.525546 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwgm" event={"ID":"788f4442-73e7-486a-b79c-83560f1c7cc3","Type":"ContainerDied","Data":"79cfdea3a8ebdd8e18169bcc043f0370956f0a098fde85c539d74f460ad09c89"} Jan 27 06:59:23 crc kubenswrapper[4872]: I0127 06:59:23.544363 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sq6g2" event={"ID":"059bb67f-38aa-492a-8d62-cfc3d3efc41d","Type":"ContainerStarted","Data":"fabaf851374422f8445701640b439377d5132102c11766bd3bdb2210336284b8"} Jan 27 06:59:23 crc kubenswrapper[4872]: I0127 06:59:23.551873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lb59" event={"ID":"dc57058f-3a9e-42fb-ae07-53e246ed8fc2","Type":"ContainerStarted","Data":"743ad9c54202f9c60bea92d548089a75e64d5215feee5a526f7c6e1379d7e557"} Jan 27 06:59:23 crc kubenswrapper[4872]: I0127 06:59:23.555667 4872 generic.go:334] "Generic (PLEG): container finished" podID="788f4442-73e7-486a-b79c-83560f1c7cc3" containerID="9a5cc6e01449dcb79e518f06f946eeee5c64c6499cede0dc75b321dbd5afeca3" exitCode=0 Jan 27 06:59:23 crc kubenswrapper[4872]: I0127 06:59:23.555723 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwgm" event={"ID":"788f4442-73e7-486a-b79c-83560f1c7cc3","Type":"ContainerDied","Data":"9a5cc6e01449dcb79e518f06f946eeee5c64c6499cede0dc75b321dbd5afeca3"} Jan 27 06:59:23 crc kubenswrapper[4872]: I0127 06:59:23.564311 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sq6g2" podStartSLOduration=3.143430309 podStartE2EDuration="5.56429641s" podCreationTimestamp="2026-01-27 06:59:18 +0000 UTC" firstStartedPulling="2026-01-27 06:59:20.501135196 +0000 UTC m=+337.028610392" lastFinishedPulling="2026-01-27 06:59:22.922001297 +0000 UTC m=+339.449476493" observedRunningTime="2026-01-27 06:59:23.564024293 +0000 UTC m=+340.091499489" watchObservedRunningTime="2026-01-27 06:59:23.56429641 +0000 UTC m=+340.091771606" Jan 27 06:59:23 crc kubenswrapper[4872]: I0127 06:59:23.609588 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5lb59" podStartSLOduration=3.123602928 podStartE2EDuration="5.609571233s" podCreationTimestamp="2026-01-27 06:59:18 +0000 UTC" firstStartedPulling="2026-01-27 06:59:20.501145797 +0000 UTC m=+337.028620993" 
lastFinishedPulling="2026-01-27 06:59:22.987114082 +0000 UTC m=+339.514589298" observedRunningTime="2026-01-27 06:59:23.606186349 +0000 UTC m=+340.133661565" watchObservedRunningTime="2026-01-27 06:59:23.609571233 +0000 UTC m=+340.137046429" Jan 27 06:59:24 crc kubenswrapper[4872]: I0127 06:59:24.562520 4872 generic.go:334] "Generic (PLEG): container finished" podID="81982f4d-c34f-4617-ae87-9021ffbad391" containerID="02ae1de2514256bff92ef8b070a7cfdacdda466a3bea83bfc24b9892380ac602" exitCode=0 Jan 27 06:59:24 crc kubenswrapper[4872]: I0127 06:59:24.562638 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx8st" event={"ID":"81982f4d-c34f-4617-ae87-9021ffbad391","Type":"ContainerDied","Data":"02ae1de2514256bff92ef8b070a7cfdacdda466a3bea83bfc24b9892380ac602"} Jan 27 06:59:24 crc kubenswrapper[4872]: I0127 06:59:24.571238 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwgm" event={"ID":"788f4442-73e7-486a-b79c-83560f1c7cc3","Type":"ContainerStarted","Data":"429c358eed14ef215a5e8a4c118f61074e999d10c2bb903d0b386b70cc325dcf"} Jan 27 06:59:24 crc kubenswrapper[4872]: I0127 06:59:24.612016 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zzwgm" podStartSLOduration=3.198357125 podStartE2EDuration="4.611998453s" podCreationTimestamp="2026-01-27 06:59:20 +0000 UTC" firstStartedPulling="2026-01-27 06:59:22.533724788 +0000 UTC m=+339.061199984" lastFinishedPulling="2026-01-27 06:59:23.947366106 +0000 UTC m=+340.474841312" observedRunningTime="2026-01-27 06:59:24.609979627 +0000 UTC m=+341.137454833" watchObservedRunningTime="2026-01-27 06:59:24.611998453 +0000 UTC m=+341.139473649" Jan 27 06:59:25 crc kubenswrapper[4872]: I0127 06:59:25.000934 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:59:25 crc kubenswrapper[4872]: I0127 06:59:25.001008 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:59:25 crc kubenswrapper[4872]: I0127 06:59:25.605096 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx8st" event={"ID":"81982f4d-c34f-4617-ae87-9021ffbad391","Type":"ContainerStarted","Data":"6cb0983c5c8c5bfa7a4c8223b9c1ebee268735976648f9f3d55bd010ffac0376"} Jan 27 06:59:28 crc kubenswrapper[4872]: I0127 06:59:28.612521 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:28 crc kubenswrapper[4872]: I0127 06:59:28.613066 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:28 crc kubenswrapper[4872]: I0127 06:59:28.660902 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:28 crc kubenswrapper[4872]: I0127 06:59:28.682881 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-dx8st" podStartSLOduration=6.252243825 podStartE2EDuration="8.682865983s" podCreationTimestamp="2026-01-27 06:59:20 +0000 UTC" firstStartedPulling="2026-01-27 06:59:22.52283321 +0000 UTC m=+339.050308406" lastFinishedPulling="2026-01-27 06:59:24.953455368 +0000 UTC m=+341.480930564" observedRunningTime="2026-01-27 06:59:25.639151882 +0000 UTC m=+342.166627078" watchObservedRunningTime="2026-01-27 06:59:28.682865983 +0000 UTC m=+345.210341179" Jan 27 06:59:28 crc kubenswrapper[4872]: I0127 06:59:28.811157 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:28 crc kubenswrapper[4872]: I0127 06:59:28.811216 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:29 crc kubenswrapper[4872]: I0127 06:59:29.665496 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5lb59" Jan 27 06:59:29 crc kubenswrapper[4872]: I0127 06:59:29.858583 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sq6g2" podUID="059bb67f-38aa-492a-8d62-cfc3d3efc41d" containerName="registry-server" probeResult="failure" output=< Jan 27 06:59:29 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Jan 27 06:59:29 crc kubenswrapper[4872]: > Jan 27 06:59:30 crc kubenswrapper[4872]: I0127 06:59:30.042719 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cp9cf" Jan 27 06:59:30 crc kubenswrapper[4872]: I0127 06:59:30.096975 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rthkh"] Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.005755 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.006702 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.049186 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.224270 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.224622 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.269116 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.669050 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zzwgm" Jan 27 06:59:31 crc kubenswrapper[4872]: I0127 06:59:31.671079 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dx8st" Jan 27 06:59:38 crc kubenswrapper[4872]: I0127 06:59:38.846683 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:38 crc kubenswrapper[4872]: I0127 06:59:38.889130 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sq6g2" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.001540 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.003088 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.148220 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" podUID="c17752d4-ad64-445d-882f-134f79928b40" containerName="registry" containerID="cri-o://644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df" gracePeriod=30 Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.459415 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560356 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-registry-certificates\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560438 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-trusted-ca\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560458 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-bound-sa-token\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560476 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-registry-tls\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560653 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560677 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/c17752d4-ad64-445d-882f-134f79928b40-ca-trust-extracted\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560715 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btmdk\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-kube-api-access-btmdk\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.560737 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c17752d4-ad64-445d-882f-134f79928b40-installation-pull-secrets\") pod \"c17752d4-ad64-445d-882f-134f79928b40\" (UID: \"c17752d4-ad64-445d-882f-134f79928b40\") " Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.562520 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.566369 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.566476 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.567963 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-kube-api-access-btmdk" (OuterVolumeSpecName: "kube-api-access-btmdk") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "kube-api-access-btmdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.569238 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.574521 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17752d4-ad64-445d-882f-134f79928b40-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.577972 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c17752d4-ad64-445d-882f-134f79928b40-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.583975 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c17752d4-ad64-445d-882f-134f79928b40" (UID: "c17752d4-ad64-445d-882f-134f79928b40"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661636 4872 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c17752d4-ad64-445d-882f-134f79928b40-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661669 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btmdk\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-kube-api-access-btmdk\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661680 4872 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c17752d4-ad64-445d-882f-134f79928b40-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661690 4872 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661701 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c17752d4-ad64-445d-882f-134f79928b40-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661709 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.661717 4872 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c17752d4-ad64-445d-882f-134f79928b40-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.751290 4872 generic.go:334] "Generic (PLEG): container finished" podID="c17752d4-ad64-445d-882f-134f79928b40" containerID="644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df" exitCode=0 Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.751342 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" event={"ID":"c17752d4-ad64-445d-882f-134f79928b40","Type":"ContainerDied","Data":"644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df"} Jan 27 
06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.751371 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" event={"ID":"c17752d4-ad64-445d-882f-134f79928b40","Type":"ContainerDied","Data":"f38c900663ef2977346e360d787b7e3b7476c27709c1f92a97cfe0985958ba75"} Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.751391 4872 scope.go:117] "RemoveContainer" containerID="644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.752079 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rthkh" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.767365 4872 scope.go:117] "RemoveContainer" containerID="644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df" Jan 27 06:59:55 crc kubenswrapper[4872]: E0127 06:59:55.767927 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df\": container with ID starting with 644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df not found: ID does not exist" containerID="644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.767964 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df"} err="failed to get container status \"644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df\": rpc error: code = NotFound desc = could not find container \"644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df\": container with ID starting with 644b4111cf88d869ba9f84f85d675c0daa97575f19c0f62ba45cc0fc41ecb3df not found: ID does not exist" Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.789229 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rthkh"] Jan 27 06:59:55 crc kubenswrapper[4872]: I0127 06:59:55.791929 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rthkh"] Jan 27 06:59:56 crc kubenswrapper[4872]: I0127 06:59:56.104469 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17752d4-ad64-445d-882f-134f79928b40" path="/var/lib/kubelet/pods/c17752d4-ad64-445d-882f-134f79928b40/volumes" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.173618 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff"] Jan 27 07:00:00 crc kubenswrapper[4872]: E0127 07:00:00.174153 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17752d4-ad64-445d-882f-134f79928b40" containerName="registry" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.174166 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17752d4-ad64-445d-882f-134f79928b40" containerName="registry" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.174276 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17752d4-ad64-445d-882f-134f79928b40" containerName="registry" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.174637 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.178609 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.179221 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.182181 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff"] Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.316658 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-config-volume\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.317091 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-secret-volume\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.317249 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hw4\" (UniqueName: \"kubernetes.io/projected/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-kube-api-access-m2hw4\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.418576 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-config-volume\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.418630 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-secret-volume\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.418674 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2hw4\" (UniqueName: \"kubernetes.io/projected/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-kube-api-access-m2hw4\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.419715 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-config-volume\") pod 
\"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.427665 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-secret-volume\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.437663 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2hw4\" (UniqueName: \"kubernetes.io/projected/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-kube-api-access-m2hw4\") pod \"collect-profiles-29491620-6wcff\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.515722 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.692378 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff"] Jan 27 07:00:00 crc kubenswrapper[4872]: I0127 07:00:00.778745 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" event={"ID":"f3b5ef0e-d005-4681-a1f8-1a2415a3513a","Type":"ContainerStarted","Data":"05f6bc09d2d021db3c752f6ca1971e00993ef3666e0a72a12a6fdb53199ba06d"} Jan 27 07:00:01 crc kubenswrapper[4872]: I0127 07:00:01.784140 4872 generic.go:334] "Generic (PLEG): container finished" podID="f3b5ef0e-d005-4681-a1f8-1a2415a3513a" containerID="a84020e4636533cad134f7ee0f930de2494c66df26241503194d327d062c2f84" exitCode=0 Jan 27 07:00:01 crc kubenswrapper[4872]: I0127 07:00:01.784185 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" event={"ID":"f3b5ef0e-d005-4681-a1f8-1a2415a3513a","Type":"ContainerDied","Data":"a84020e4636533cad134f7ee0f930de2494c66df26241503194d327d062c2f84"} Jan 27 07:00:02 crc kubenswrapper[4872]: I0127 07:00:02.978908 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.148755 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2hw4\" (UniqueName: \"kubernetes.io/projected/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-kube-api-access-m2hw4\") pod \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.148889 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-config-volume\") pod \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.148964 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-secret-volume\") pod \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\" (UID: \"f3b5ef0e-d005-4681-a1f8-1a2415a3513a\") " Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.149297 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3b5ef0e-d005-4681-a1f8-1a2415a3513a" (UID: "f3b5ef0e-d005-4681-a1f8-1a2415a3513a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.153092 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-kube-api-access-m2hw4" (OuterVolumeSpecName: "kube-api-access-m2hw4") pod "f3b5ef0e-d005-4681-a1f8-1a2415a3513a" (UID: "f3b5ef0e-d005-4681-a1f8-1a2415a3513a"). InnerVolumeSpecName "kube-api-access-m2hw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.153158 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3b5ef0e-d005-4681-a1f8-1a2415a3513a" (UID: "f3b5ef0e-d005-4681-a1f8-1a2415a3513a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.250043 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.250076 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2hw4\" (UniqueName: \"kubernetes.io/projected/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-kube-api-access-m2hw4\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.250087 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b5ef0e-d005-4681-a1f8-1a2415a3513a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.795031 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" event={"ID":"f3b5ef0e-d005-4681-a1f8-1a2415a3513a","Type":"ContainerDied","Data":"05f6bc09d2d021db3c752f6ca1971e00993ef3666e0a72a12a6fdb53199ba06d"} Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.795073 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f6bc09d2d021db3c752f6ca1971e00993ef3666e0a72a12a6fdb53199ba06d" Jan 27 07:00:03 crc kubenswrapper[4872]: I0127 07:00:03.795112 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-6wcff" Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.001638 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.002244 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.002293 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.002879 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80e90793575fa0faed445037a9d207d5dd942489472202844d524e334d89cc5a"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.002932 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://80e90793575fa0faed445037a9d207d5dd942489472202844d524e334d89cc5a" gracePeriod=600 Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.916218 4872 generic.go:334] "Generic (PLEG): container finished" 
podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="80e90793575fa0faed445037a9d207d5dd942489472202844d524e334d89cc5a" exitCode=0 Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.916335 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"80e90793575fa0faed445037a9d207d5dd942489472202844d524e334d89cc5a"} Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.916812 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"0caf69cb9e39f3837852069683cde633e818b7d540e32a37fd7b20bbc8a7756d"} Jan 27 07:00:25 crc kubenswrapper[4872]: I0127 07:00:25.916867 4872 scope.go:117] "RemoveContainer" containerID="21d707a67fff28aed15595440427ef161afbf1ddd07cc3b5957e2baab48d4de6" Jan 27 07:02:25 crc kubenswrapper[4872]: I0127 07:02:25.001207 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:02:25 crc kubenswrapper[4872]: I0127 07:02:25.001732 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:02:55 crc kubenswrapper[4872]: I0127 07:02:55.001895 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:02:55 crc kubenswrapper[4872]: I0127 07:02:55.002406 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.001728 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.003174 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.003247 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.004049 4872 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0caf69cb9e39f3837852069683cde633e818b7d540e32a37fd7b20bbc8a7756d"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.004153 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://0caf69cb9e39f3837852069683cde633e818b7d540e32a37fd7b20bbc8a7756d" gracePeriod=600 Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.840540 4872 generic.go:334] "Generic (PLEG): container finished" podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="0caf69cb9e39f3837852069683cde633e818b7d540e32a37fd7b20bbc8a7756d" exitCode=0 Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.840769 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"0caf69cb9e39f3837852069683cde633e818b7d540e32a37fd7b20bbc8a7756d"} Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.840911 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"462f190318026a23f51e6663c8fa4063ee29388a381bdeb37568ca3834d2b1f2"} Jan 27 07:03:25 crc kubenswrapper[4872]: I0127 07:03:25.840932 4872 scope.go:117] "RemoveContainer" containerID="80e90793575fa0faed445037a9d207d5dd942489472202844d524e334d89cc5a" Jan 27 07:03:44 crc kubenswrapper[4872]: I0127 07:03:44.296219 4872 scope.go:117] "RemoveContainer" containerID="5a3396e58ccb076dbc83ce927ea16dc9cbfcfdd4979b0956fbcadde73c70891b" Jan 27 07:03:44 crc kubenswrapper[4872]: I0127 07:03:44.311034 4872 scope.go:117] "RemoveContainer" containerID="bb111da8bc23b83ff7a68f0c891e47ff454879501c423c7e0cf5934851808faa" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.533655 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b"] Jan 27 07:04:45 crc kubenswrapper[4872]: E0127 07:04:45.535455 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b5ef0e-d005-4681-a1f8-1a2415a3513a" containerName="collect-profiles" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.535562 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b5ef0e-d005-4681-a1f8-1a2415a3513a" containerName="collect-profiles" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.535769 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b5ef0e-d005-4681-a1f8-1a2415a3513a" containerName="collect-profiles" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.536329 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.539674 4872 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-s69xt" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.539802 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.539883 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.549790 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2bbbk"] Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.550411 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2bbbk" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.554637 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b"] Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.554742 4872 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mmghm" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.560143 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2bbbk"] Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.574810 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l6kbm"] Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.575478 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.578431 4872 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-w28ms" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.596744 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l6kbm"] Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.624595 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrpjs\" (UniqueName: \"kubernetes.io/projected/0f5348ad-9399-4799-96c9-972a71d03900-kube-api-access-zrpjs\") pod \"cert-manager-cainjector-cf98fcc89-pvs6b\" (UID: \"0f5348ad-9399-4799-96c9-972a71d03900\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.624668 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzf4\" (UniqueName: \"kubernetes.io/projected/8b332b36-0066-48c8-a411-12cfe9eefdae-kube-api-access-bzzf4\") pod \"cert-manager-858654f9db-2bbbk\" (UID: \"8b332b36-0066-48c8-a411-12cfe9eefdae\") " pod="cert-manager/cert-manager-858654f9db-2bbbk" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.624701 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csnft\" (UniqueName: \"kubernetes.io/projected/9f718d42-d842-42f7-bafc-a10d960c3555-kube-api-access-csnft\") pod \"cert-manager-webhook-687f57d79b-l6kbm\" (UID: \"9f718d42-d842-42f7-bafc-a10d960c3555\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:45 crc 
kubenswrapper[4872]: I0127 07:04:45.725425 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csnft\" (UniqueName: \"kubernetes.io/projected/9f718d42-d842-42f7-bafc-a10d960c3555-kube-api-access-csnft\") pod \"cert-manager-webhook-687f57d79b-l6kbm\" (UID: \"9f718d42-d842-42f7-bafc-a10d960c3555\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.725505 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrpjs\" (UniqueName: \"kubernetes.io/projected/0f5348ad-9399-4799-96c9-972a71d03900-kube-api-access-zrpjs\") pod \"cert-manager-cainjector-cf98fcc89-pvs6b\" (UID: \"0f5348ad-9399-4799-96c9-972a71d03900\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.725561 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzzf4\" (UniqueName: \"kubernetes.io/projected/8b332b36-0066-48c8-a411-12cfe9eefdae-kube-api-access-bzzf4\") pod \"cert-manager-858654f9db-2bbbk\" (UID: \"8b332b36-0066-48c8-a411-12cfe9eefdae\") " pod="cert-manager/cert-manager-858654f9db-2bbbk" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.748160 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzzf4\" (UniqueName: \"kubernetes.io/projected/8b332b36-0066-48c8-a411-12cfe9eefdae-kube-api-access-bzzf4\") pod \"cert-manager-858654f9db-2bbbk\" (UID: \"8b332b36-0066-48c8-a411-12cfe9eefdae\") " pod="cert-manager/cert-manager-858654f9db-2bbbk" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.755968 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrpjs\" (UniqueName: \"kubernetes.io/projected/0f5348ad-9399-4799-96c9-972a71d03900-kube-api-access-zrpjs\") pod \"cert-manager-cainjector-cf98fcc89-pvs6b\" (UID: \"0f5348ad-9399-4799-96c9-972a71d03900\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.756423 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csnft\" (UniqueName: \"kubernetes.io/projected/9f718d42-d842-42f7-bafc-a10d960c3555-kube-api-access-csnft\") pod \"cert-manager-webhook-687f57d79b-l6kbm\" (UID: \"9f718d42-d842-42f7-bafc-a10d960c3555\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.850021 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.861452 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2bbbk" Jan 27 07:04:45 crc kubenswrapper[4872]: I0127 07:04:45.888442 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:46 crc kubenswrapper[4872]: I0127 07:04:46.061279 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b"] Jan 27 07:04:46 crc kubenswrapper[4872]: W0127 07:04:46.082287 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f5348ad_9399_4799_96c9_972a71d03900.slice/crio-2041fc161a2f51a26b692eac5aa234651959f17d9532a4230274c8ace361d7fb WatchSource:0}: Error finding container 2041fc161a2f51a26b692eac5aa234651959f17d9532a4230274c8ace361d7fb: Status 404 returned error can't find the container with id 2041fc161a2f51a26b692eac5aa234651959f17d9532a4230274c8ace361d7fb Jan 27 07:04:46 crc kubenswrapper[4872]: I0127 07:04:46.085757 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:04:46 crc kubenswrapper[4872]: I0127 07:04:46.137813 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2bbbk"] Jan 27 07:04:46 crc kubenswrapper[4872]: W0127 07:04:46.140657 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b332b36_0066_48c8_a411_12cfe9eefdae.slice/crio-c3ff4426246e3c9448ac0156d37932b822e4b5515d37e242c9ad0d35d1706d6c WatchSource:0}: Error finding container c3ff4426246e3c9448ac0156d37932b822e4b5515d37e242c9ad0d35d1706d6c: Status 404 returned error can't find the container with id c3ff4426246e3c9448ac0156d37932b822e4b5515d37e242c9ad0d35d1706d6c Jan 27 07:04:46 crc kubenswrapper[4872]: I0127 07:04:46.225859 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2bbbk" event={"ID":"8b332b36-0066-48c8-a411-12cfe9eefdae","Type":"ContainerStarted","Data":"c3ff4426246e3c9448ac0156d37932b822e4b5515d37e242c9ad0d35d1706d6c"} Jan 27 07:04:46 crc kubenswrapper[4872]: I0127 07:04:46.226687 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" event={"ID":"0f5348ad-9399-4799-96c9-972a71d03900","Type":"ContainerStarted","Data":"2041fc161a2f51a26b692eac5aa234651959f17d9532a4230274c8ace361d7fb"} Jan 27 07:04:46 crc kubenswrapper[4872]: I0127 07:04:46.413742 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l6kbm"] Jan 27 07:04:46 crc kubenswrapper[4872]: W0127 07:04:46.415877 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f718d42_d842_42f7_bafc_a10d960c3555.slice/crio-3dfed14d097798325abf9beb205225ff1f7843929286eb1f504b09a06da21cdb WatchSource:0}: Error finding container 3dfed14d097798325abf9beb205225ff1f7843929286eb1f504b09a06da21cdb: Status 404 returned error can't find the container with id 3dfed14d097798325abf9beb205225ff1f7843929286eb1f504b09a06da21cdb Jan 27 07:04:47 crc kubenswrapper[4872]: I0127 07:04:47.232959 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" event={"ID":"9f718d42-d842-42f7-bafc-a10d960c3555","Type":"ContainerStarted","Data":"3dfed14d097798325abf9beb205225ff1f7843929286eb1f504b09a06da21cdb"} Jan 27 07:04:49 crc kubenswrapper[4872]: I0127 07:04:49.242718 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" 
event={"ID":"0f5348ad-9399-4799-96c9-972a71d03900","Type":"ContainerStarted","Data":"17de255d712e39d70fdcd72d57a845c2b1954b8a6ecf64ca122d7e236967b7e1"} Jan 27 07:04:49 crc kubenswrapper[4872]: I0127 07:04:49.260413 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pvs6b" podStartSLOduration=1.694572698 podStartE2EDuration="4.260394645s" podCreationTimestamp="2026-01-27 07:04:45 +0000 UTC" firstStartedPulling="2026-01-27 07:04:46.085503907 +0000 UTC m=+662.612979103" lastFinishedPulling="2026-01-27 07:04:48.651325844 +0000 UTC m=+665.178801050" observedRunningTime="2026-01-27 07:04:49.2580036 +0000 UTC m=+665.785478816" watchObservedRunningTime="2026-01-27 07:04:49.260394645 +0000 UTC m=+665.787869841" Jan 27 07:04:51 crc kubenswrapper[4872]: I0127 07:04:51.255634 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" event={"ID":"9f718d42-d842-42f7-bafc-a10d960c3555","Type":"ContainerStarted","Data":"51bd312f5f69b7963589456e4e2b6a25073bc95e243b1f917b8853eeef1afbe4"} Jan 27 07:04:51 crc kubenswrapper[4872]: I0127 07:04:51.257861 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:51 crc kubenswrapper[4872]: I0127 07:04:51.258869 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2bbbk" event={"ID":"8b332b36-0066-48c8-a411-12cfe9eefdae","Type":"ContainerStarted","Data":"5d1d63e737707850502da78c44835cc81d54ba858d74a526e0ad16814d70a988"} Jan 27 07:04:51 crc kubenswrapper[4872]: I0127 07:04:51.284279 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" podStartSLOduration=2.248656607 podStartE2EDuration="6.28423797s" podCreationTimestamp="2026-01-27 07:04:45 +0000 UTC" firstStartedPulling="2026-01-27 07:04:46.418005926 +0000 UTC m=+662.945481132" lastFinishedPulling="2026-01-27 07:04:50.453587299 +0000 UTC m=+666.981062495" observedRunningTime="2026-01-27 07:04:51.271747531 +0000 UTC m=+667.799222727" watchObservedRunningTime="2026-01-27 07:04:51.28423797 +0000 UTC m=+667.811713166" Jan 27 07:04:51 crc kubenswrapper[4872]: I0127 07:04:51.294333 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2bbbk" podStartSLOduration=1.983252218 podStartE2EDuration="6.294314563s" podCreationTimestamp="2026-01-27 07:04:45 +0000 UTC" firstStartedPulling="2026-01-27 07:04:46.141466625 +0000 UTC m=+662.668941811" lastFinishedPulling="2026-01-27 07:04:50.45252896 +0000 UTC m=+666.980004156" observedRunningTime="2026-01-27 07:04:51.292195266 +0000 UTC m=+667.819670462" watchObservedRunningTime="2026-01-27 07:04:51.294314563 +0000 UTC m=+667.821789759" Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.994165 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ww8p7"] Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995007 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-controller" containerID="cri-o://33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9" gracePeriod=30 Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995411 4872 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="sbdb" containerID="cri-o://2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475" gracePeriod=30 Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995456 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="nbdb" containerID="cri-o://d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1" gracePeriod=30 Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995496 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="northd" containerID="cri-o://fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf" gracePeriod=30 Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995535 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1" gracePeriod=30 Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995572 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-node" containerID="cri-o://1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502" gracePeriod=30 Jan 27 07:04:54 crc kubenswrapper[4872]: I0127 07:04:54.995622 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-acl-logging" containerID="cri-o://283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564" gracePeriod=30 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.041363 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" containerID="cri-o://8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d" gracePeriod=30 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.282973 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/2.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.283596 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/1.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.283651 4872 generic.go:334] "Generic (PLEG): container finished" podID="8575a338-fc73-4413-ab05-0fdfdd6bdf2d" containerID="9d582de30cd4995904c662993a12d753218a97f46976ef04e2bdb771d2ac83bb" exitCode=2 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.283709 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerDied","Data":"9d582de30cd4995904c662993a12d753218a97f46976ef04e2bdb771d2ac83bb"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.283751 4872 scope.go:117] "RemoveContainer" 
containerID="07b8c08bb7d393cd9ee35fe3ab28fd559dd83a249f46189803c4b3fc5a4aff20" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.284366 4872 scope.go:117] "RemoveContainer" containerID="9d582de30cd4995904c662993a12d753218a97f46976ef04e2bdb771d2ac83bb" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.284690 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nvjgr_openshift-multus(8575a338-fc73-4413-ab05-0fdfdd6bdf2d)\"" pod="openshift-multus/multus-nvjgr" podUID="8575a338-fc73-4413-ab05-0fdfdd6bdf2d" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.287449 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/3.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.290624 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovn-acl-logging/0.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291125 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovn-controller/0.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291789 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d" exitCode=0 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291808 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475" exitCode=0 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291817 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf" exitCode=0 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291823 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1" exitCode=0 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291830 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502" exitCode=0 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291880 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564" exitCode=143 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291879 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291916 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475"} Jan 27 07:04:55 crc 
kubenswrapper[4872]: I0127 07:04:55.291931 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291953 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291965 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291976 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291987 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9"} Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.291887 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9" exitCode=143 Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.335921 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovnkube-controller/3.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.338084 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovn-acl-logging/0.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.338557 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovn-controller/0.log" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.339071 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.339153 4872 scope.go:117] "RemoveContainer" containerID="d2c7b3a81b9d9a557be44ad04d8a4946c13cdd61a7984f8f9446b241b053e28c" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.398724 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qlqst"] Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399026 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399047 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399056 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399066 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399076 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399083 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399095 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399102 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399113 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399119 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399149 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="sbdb" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399157 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="sbdb" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399171 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="northd" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399178 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="northd" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399187 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kubecfg-setup" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399194 4872 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kubecfg-setup" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399208 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399215 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399224 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-node" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399231 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-node" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399241 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-acl-logging" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399248 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-acl-logging" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399256 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="nbdb" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399262 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="nbdb" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399374 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-acl-logging" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399385 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="sbdb" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399395 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="northd" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399407 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399414 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovn-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399422 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399430 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399440 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="kube-rbac-proxy-node" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399449 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="nbdb" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399460 4872 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: E0127 07:04:55.399580 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399590 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399692 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.399919 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerName="ovnkube-controller" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.401500 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449589 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-kubelet\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449658 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-netns\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449690 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-ovn\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449718 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-netd\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449759 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-script-lib\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449784 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-systemd-units\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449811 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-openvswitch\") pod 
\"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449861 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-slash\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449890 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovn-node-metrics-cert\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449913 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-config\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449940 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-node-log\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449983 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-bin\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450014 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-env-overrides\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450043 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xnln\" (UniqueName: \"kubernetes.io/projected/b62e2eec-d750-4b03-90a4-4082a5d8ca18-kube-api-access-9xnln\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450066 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450092 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-etc-openvswitch\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450113 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-var-lib-openvswitch\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450140 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-ovn-kubernetes\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450165 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-log-socket\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450189 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-systemd\") pod \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\" (UID: \"b62e2eec-d750-4b03-90a4-4082a5d8ca18\") " Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449746 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450290 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449779 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.449794 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450244 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450311 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450265 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450256 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450370 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450388 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-slash" (OuterVolumeSpecName: "host-slash") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450422 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450456 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450480 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-log-socket" (OuterVolumeSpecName: "log-socket") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.450573 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.451181 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.451464 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-node-log" (OuterVolumeSpecName: "node-log") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.451660 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452269 4872 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452374 4872 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452466 4872 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452540 4872 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452613 4872 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452708 4872 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.452787 4872 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453063 4872 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453158 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453225 4872 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453311 4872 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453369 4872 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453426 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 
07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453490 4872 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453556 4872 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453642 4872 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b62e2eec-d750-4b03-90a4-4082a5d8ca18-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.453724 4872 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.467385 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62e2eec-d750-4b03-90a4-4082a5d8ca18-kube-api-access-9xnln" (OuterVolumeSpecName: "kube-api-access-9xnln") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "kube-api-access-9xnln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.471087 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.483331 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b62e2eec-d750-4b03-90a4-4082a5d8ca18" (UID: "b62e2eec-d750-4b03-90a4-4082a5d8ca18"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.554643 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-run-netns\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.554993 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555093 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-slash\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555177 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-systemd\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555303 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-log-socket\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555430 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-env-overrides\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555551 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-var-lib-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555686 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-cni-netd\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555814 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-kubelet\") pod 
\"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555925 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovnkube-script-lib\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555960 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-cni-bin\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.555988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovn-node-metrics-cert\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556011 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-systemd-units\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556027 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-run-ovn-kubernetes\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556042 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556061 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-node-log\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556082 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-etc-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556105 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmgzc\" (UniqueName: \"kubernetes.io/projected/0ee6810d-5593-4d69-b29b-5fe28af0de91-kube-api-access-fmgzc\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556124 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovnkube-config\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556169 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-ovn\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556224 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b62e2eec-d750-4b03-90a4-4082a5d8ca18-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556237 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xnln\" (UniqueName: \"kubernetes.io/projected/b62e2eec-d750-4b03-90a4-4082a5d8ca18-kube-api-access-9xnln\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.556246 4872 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b62e2eec-d750-4b03-90a4-4082a5d8ca18-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.657580 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-cni-netd\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658171 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-kubelet\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658328 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-kubelet\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658344 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovnkube-script-lib\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.657719 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-cni-netd\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658421 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-cni-bin\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658459 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovn-node-metrics-cert\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658489 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-systemd-units\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658506 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-run-ovn-kubernetes\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658519 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-cni-bin\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658524 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658544 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658556 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-node-log\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658577 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-etc-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658596 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmgzc\" (UniqueName: \"kubernetes.io/projected/0ee6810d-5593-4d69-b29b-5fe28af0de91-kube-api-access-fmgzc\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658614 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovnkube-config\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658655 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-ovn\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658706 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-run-netns\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658732 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658750 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-slash\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658770 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-systemd\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658825 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-log-socket\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658861 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-env-overrides\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658880 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-var-lib-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658936 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-var-lib-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658958 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-node-log\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.658977 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-etc-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659085 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-openvswitch\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659360 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-slash\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659397 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-systemd\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659477 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-log-socket\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659549 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-run-ovn\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659579 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-systemd-units\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659613 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-run-ovn-kubernetes\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659642 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ee6810d-5593-4d69-b29b-5fe28af0de91-host-run-netns\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659673 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovnkube-config\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.659894 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-env-overrides\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.660688 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovnkube-script-lib\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.662521 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ee6810d-5593-4d69-b29b-5fe28af0de91-ovn-node-metrics-cert\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.682829 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmgzc\" (UniqueName: \"kubernetes.io/projected/0ee6810d-5593-4d69-b29b-5fe28af0de91-kube-api-access-fmgzc\") pod \"ovnkube-node-qlqst\" (UID: \"0ee6810d-5593-4d69-b29b-5fe28af0de91\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.715055 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:04:55 crc kubenswrapper[4872]: I0127 07:04:55.891989 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-l6kbm" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.297719 4872 generic.go:334] "Generic (PLEG): container finished" podID="0ee6810d-5593-4d69-b29b-5fe28af0de91" containerID="7b543b8240288c92526eb647fd90faba07a6b914eecf710950a7b194062b2b1a" exitCode=0 Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.297780 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerDied","Data":"7b543b8240288c92526eb647fd90faba07a6b914eecf710950a7b194062b2b1a"} Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.297808 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"6ce34f0914e15bdca76a563a02446ac3991e71d3caa28b1aacbf2c020f9f748c"} Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.302552 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovn-acl-logging/0.log" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.303023 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ww8p7_b62e2eec-d750-4b03-90a4-4082a5d8ca18/ovn-controller/0.log" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.303377 4872 generic.go:334] "Generic (PLEG): container finished" podID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" containerID="d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1" exitCode=0 Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.303464 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1"} Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.303486 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" event={"ID":"b62e2eec-d750-4b03-90a4-4082a5d8ca18","Type":"ContainerDied","Data":"de5353229071b7ca67ee4936300b203fa68d3671141d3d704156082a85aab624"} Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.303504 4872 scope.go:117] "RemoveContainer" containerID="8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.303507 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ww8p7" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.306806 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/2.log" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.332041 4872 scope.go:117] "RemoveContainer" containerID="2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.345697 4872 scope.go:117] "RemoveContainer" containerID="d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.382461 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ww8p7"] Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.385028 4872 scope.go:117] "RemoveContainer" containerID="fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.386146 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ww8p7"] Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.399607 4872 scope.go:117] "RemoveContainer" containerID="943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.411964 4872 scope.go:117] "RemoveContainer" containerID="1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.425010 4872 scope.go:117] "RemoveContainer" containerID="283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.437194 4872 scope.go:117] "RemoveContainer" containerID="33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.453722 4872 scope.go:117] "RemoveContainer" containerID="a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.535604 4872 scope.go:117] "RemoveContainer" containerID="8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.536124 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d\": container with ID starting with 8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d not found: ID does not exist" containerID="8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.536215 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d"} err="failed to get container status \"8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d\": rpc error: code = NotFound desc = could not find container \"8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d\": container with ID starting with 8bc38bef1b1dcd10d87a35880278719bae48d0b4cf71a91c3224f4828b6ed04d not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.536242 4872 scope.go:117] "RemoveContainer" containerID="2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.536552 4872 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\": container with ID starting with 2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475 not found: ID does not exist" containerID="2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.536580 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475"} err="failed to get container status \"2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\": rpc error: code = NotFound desc = could not find container \"2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475\": container with ID starting with 2a89eebd19a6dfe868ed7b755ce7567085a92f09fa8d09e944dc799d6c064475 not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.536598 4872 scope.go:117] "RemoveContainer" containerID="d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.537272 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\": container with ID starting with d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1 not found: ID does not exist" containerID="d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.537295 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1"} err="failed to get container status \"d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\": rpc error: code = NotFound desc = could not find container \"d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1\": container with ID starting with d31e369edd9faea07ccf9e476322488c670b19c05015e50fe9eef27c9aac04a1 not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.537308 4872 scope.go:117] "RemoveContainer" containerID="fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.537660 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\": container with ID starting with fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf not found: ID does not exist" containerID="fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.537689 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf"} err="failed to get container status \"fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\": rpc error: code = NotFound desc = could not find container \"fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf\": container with ID starting with fd067bd3638d0cef08bfc0da32d579d1411e9958861384c136fe515b1cb28aaf not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.537710 4872 scope.go:117] "RemoveContainer" 
containerID="943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.538067 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\": container with ID starting with 943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1 not found: ID does not exist" containerID="943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.538091 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1"} err="failed to get container status \"943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\": rpc error: code = NotFound desc = could not find container \"943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1\": container with ID starting with 943f793e14eeef2ee3914a3b5ae5584ceba567a267e3e6b18c7bbee4d6cbd0f1 not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.538104 4872 scope.go:117] "RemoveContainer" containerID="1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.539243 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\": container with ID starting with 1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502 not found: ID does not exist" containerID="1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.539267 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502"} err="failed to get container status \"1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\": rpc error: code = NotFound desc = could not find container \"1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502\": container with ID starting with 1bd643d604e611634ba446714a458aa241244fd26184f67f2acdd1ff1fd7f502 not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.539284 4872 scope.go:117] "RemoveContainer" containerID="283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.539512 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\": container with ID starting with 283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564 not found: ID does not exist" containerID="283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.539540 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564"} err="failed to get container status \"283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\": rpc error: code = NotFound desc = could not find container \"283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564\": container with ID starting with 
283c601e6af95d2509196487706059e0ced589107a3dfe20e11ff6261d211564 not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.539558 4872 scope.go:117] "RemoveContainer" containerID="33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.539774 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\": container with ID starting with 33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9 not found: ID does not exist" containerID="33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.539803 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9"} err="failed to get container status \"33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\": rpc error: code = NotFound desc = could not find container \"33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9\": container with ID starting with 33a0c4b0c8fafa195f448286c0166221f776b7f8d0d8cf81bdbb19715080c3b9 not found: ID does not exist" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.539822 4872 scope.go:117] "RemoveContainer" containerID="a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c" Jan 27 07:04:56 crc kubenswrapper[4872]: E0127 07:04:56.540070 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\": container with ID starting with a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c not found: ID does not exist" containerID="a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c" Jan 27 07:04:56 crc kubenswrapper[4872]: I0127 07:04:56.540096 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c"} err="failed to get container status \"a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\": rpc error: code = NotFound desc = could not find container \"a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c\": container with ID starting with a668c2bfe78bcb7c3affeeff13ebc6d4ce9543daf087f9ba40793f3cefd5ff4c not found: ID does not exist" Jan 27 07:04:57 crc kubenswrapper[4872]: I0127 07:04:57.314474 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"095d36102b7e3ff61ab74087879e8fd5369f300ab5c9fa0b60e69cd04f9655f4"} Jan 27 07:04:57 crc kubenswrapper[4872]: I0127 07:04:57.315515 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"b65d9d6325395e1bae6d60d4222de27f2b995cff57e49592bfe988fdab60fb37"} Jan 27 07:04:57 crc kubenswrapper[4872]: I0127 07:04:57.315590 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"90f0ef278f3207a137bef31f182433d756ae14640906404e88978e9237af3a4f"} Jan 27 07:04:57 crc 
kubenswrapper[4872]: I0127 07:04:57.315653 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"62f80c9d52f88ac041d1dc1af74614ac1aa12a6b24dfa5f8b096e2e12afed382"} Jan 27 07:04:57 crc kubenswrapper[4872]: I0127 07:04:57.315725 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"97fb9bb9cfc7fbc4c7cb5d067422b0f8aa2df994861063b559a8d917d0f22614"} Jan 27 07:04:57 crc kubenswrapper[4872]: I0127 07:04:57.315793 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"01d4ee6b05f2a2384b67281131f9b069d24d8c05b21dd3e032886502e1a2150c"} Jan 27 07:04:58 crc kubenswrapper[4872]: I0127 07:04:58.104257 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b62e2eec-d750-4b03-90a4-4082a5d8ca18" path="/var/lib/kubelet/pods/b62e2eec-d750-4b03-90a4-4082a5d8ca18/volumes" Jan 27 07:04:59 crc kubenswrapper[4872]: I0127 07:04:59.331671 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"1fe458d24e73b8d5f151f94f6daa7b31234f3dd1769e7bf5b21fcc29ffafe943"} Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.345430 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" event={"ID":"0ee6810d-5593-4d69-b29b-5fe28af0de91","Type":"ContainerStarted","Data":"7662feaa60e02df4d586ec5d0aafc63c366617c506b6fdd5ee52b7e1bdb6e46d"} Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.345962 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.345976 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.345984 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.376243 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" podStartSLOduration=6.376223128 podStartE2EDuration="6.376223128s" podCreationTimestamp="2026-01-27 07:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:05:01.372112157 +0000 UTC m=+677.899587373" watchObservedRunningTime="2026-01-27 07:05:01.376223128 +0000 UTC m=+677.903698314" Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.380769 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:05:01 crc kubenswrapper[4872]: I0127 07:05:01.381633 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:05:10 crc kubenswrapper[4872]: I0127 07:05:10.098257 4872 scope.go:117] "RemoveContainer" containerID="9d582de30cd4995904c662993a12d753218a97f46976ef04e2bdb771d2ac83bb" Jan 27 07:05:10 crc kubenswrapper[4872]: E0127 
07:05:10.098903 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nvjgr_openshift-multus(8575a338-fc73-4413-ab05-0fdfdd6bdf2d)\"" pod="openshift-multus/multus-nvjgr" podUID="8575a338-fc73-4413-ab05-0fdfdd6bdf2d" Jan 27 07:05:25 crc kubenswrapper[4872]: I0127 07:05:25.001405 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:05:25 crc kubenswrapper[4872]: I0127 07:05:25.001804 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:05:25 crc kubenswrapper[4872]: I0127 07:05:25.098557 4872 scope.go:117] "RemoveContainer" containerID="9d582de30cd4995904c662993a12d753218a97f46976ef04e2bdb771d2ac83bb" Jan 27 07:05:25 crc kubenswrapper[4872]: I0127 07:05:25.460290 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nvjgr_8575a338-fc73-4413-ab05-0fdfdd6bdf2d/kube-multus/2.log" Jan 27 07:05:25 crc kubenswrapper[4872]: I0127 07:05:25.460649 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nvjgr" event={"ID":"8575a338-fc73-4413-ab05-0fdfdd6bdf2d","Type":"ContainerStarted","Data":"da1d42640f6697021c7938327e659f35e984819d8a20b36363d05242192583a6"} Jan 27 07:05:25 crc kubenswrapper[4872]: I0127 07:05:25.736530 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qlqst" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.270275 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg"] Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.273328 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.292925 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.301026 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg"] Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.392724 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjxb\" (UniqueName: \"kubernetes.io/projected/59de723c-44ad-4f00-b0d8-e04fee093418-kube-api-access-qxjxb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.393112 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.393182 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.494614 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjxb\" (UniqueName: \"kubernetes.io/projected/59de723c-44ad-4f00-b0d8-e04fee093418-kube-api-access-qxjxb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.495224 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.495775 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.495670 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.496078 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.513636 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjxb\" (UniqueName: \"kubernetes.io/projected/59de723c-44ad-4f00-b0d8-e04fee093418-kube-api-access-qxjxb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.613881 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:37 crc kubenswrapper[4872]: I0127 07:05:37.785715 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg"] Jan 27 07:05:38 crc kubenswrapper[4872]: I0127 07:05:38.541955 4872 generic.go:334] "Generic (PLEG): container finished" podID="59de723c-44ad-4f00-b0d8-e04fee093418" containerID="df1b36fac54c64c0493cac2b72f3495e8cc5b4fa58a5fa5cc6a08255ce9842b9" exitCode=0 Jan 27 07:05:38 crc kubenswrapper[4872]: I0127 07:05:38.541993 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" event={"ID":"59de723c-44ad-4f00-b0d8-e04fee093418","Type":"ContainerDied","Data":"df1b36fac54c64c0493cac2b72f3495e8cc5b4fa58a5fa5cc6a08255ce9842b9"} Jan 27 07:05:38 crc kubenswrapper[4872]: I0127 07:05:38.542017 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" event={"ID":"59de723c-44ad-4f00-b0d8-e04fee093418","Type":"ContainerStarted","Data":"7efa76498b7e10fc49569105d94d34edb34248abe13d0b48e097640620892378"} Jan 27 07:05:40 crc kubenswrapper[4872]: I0127 07:05:40.553701 4872 generic.go:334] "Generic (PLEG): container finished" podID="59de723c-44ad-4f00-b0d8-e04fee093418" containerID="701ac14e4f5bd2913e411f270a91015ef6e0740ece98649166dfbf0f70a8ffca" exitCode=0 Jan 27 07:05:40 crc kubenswrapper[4872]: I0127 07:05:40.553792 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" event={"ID":"59de723c-44ad-4f00-b0d8-e04fee093418","Type":"ContainerDied","Data":"701ac14e4f5bd2913e411f270a91015ef6e0740ece98649166dfbf0f70a8ffca"} Jan 27 07:05:41 crc kubenswrapper[4872]: I0127 07:05:41.562495 4872 generic.go:334] "Generic (PLEG): container finished" podID="59de723c-44ad-4f00-b0d8-e04fee093418" containerID="76b3499a86a9c71dfcddcc3f10d9d81757fe02a94f99443a9e59deec9119abc4" exitCode=0 Jan 27 07:05:41 crc kubenswrapper[4872]: I0127 
07:05:41.562583 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" event={"ID":"59de723c-44ad-4f00-b0d8-e04fee093418","Type":"ContainerDied","Data":"76b3499a86a9c71dfcddcc3f10d9d81757fe02a94f99443a9e59deec9119abc4"} Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.782216 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.960291 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-bundle\") pod \"59de723c-44ad-4f00-b0d8-e04fee093418\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.960335 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-util\") pod \"59de723c-44ad-4f00-b0d8-e04fee093418\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.960403 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxjxb\" (UniqueName: \"kubernetes.io/projected/59de723c-44ad-4f00-b0d8-e04fee093418-kube-api-access-qxjxb\") pod \"59de723c-44ad-4f00-b0d8-e04fee093418\" (UID: \"59de723c-44ad-4f00-b0d8-e04fee093418\") " Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.961344 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-bundle" (OuterVolumeSpecName: "bundle") pod "59de723c-44ad-4f00-b0d8-e04fee093418" (UID: "59de723c-44ad-4f00-b0d8-e04fee093418"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.966739 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59de723c-44ad-4f00-b0d8-e04fee093418-kube-api-access-qxjxb" (OuterVolumeSpecName: "kube-api-access-qxjxb") pod "59de723c-44ad-4f00-b0d8-e04fee093418" (UID: "59de723c-44ad-4f00-b0d8-e04fee093418"). InnerVolumeSpecName "kube-api-access-qxjxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:05:42 crc kubenswrapper[4872]: I0127 07:05:42.976728 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-util" (OuterVolumeSpecName: "util") pod "59de723c-44ad-4f00-b0d8-e04fee093418" (UID: "59de723c-44ad-4f00-b0d8-e04fee093418"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:05:43 crc kubenswrapper[4872]: I0127 07:05:43.062279 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxjxb\" (UniqueName: \"kubernetes.io/projected/59de723c-44ad-4f00-b0d8-e04fee093418-kube-api-access-qxjxb\") on node \"crc\" DevicePath \"\"" Jan 27 07:05:43 crc kubenswrapper[4872]: I0127 07:05:43.062316 4872 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:05:43 crc kubenswrapper[4872]: I0127 07:05:43.062325 4872 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59de723c-44ad-4f00-b0d8-e04fee093418-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:05:43 crc kubenswrapper[4872]: I0127 07:05:43.574594 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" event={"ID":"59de723c-44ad-4f00-b0d8-e04fee093418","Type":"ContainerDied","Data":"7efa76498b7e10fc49569105d94d34edb34248abe13d0b48e097640620892378"} Jan 27 07:05:43 crc kubenswrapper[4872]: I0127 07:05:43.574643 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7efa76498b7e10fc49569105d94d34edb34248abe13d0b48e097640620892378" Jan 27 07:05:43 crc kubenswrapper[4872]: I0127 07:05:43.574716 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.927423 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j8rtk"] Jan 27 07:05:48 crc kubenswrapper[4872]: E0127 07:05:48.928103 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="extract" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.928117 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="extract" Jan 27 07:05:48 crc kubenswrapper[4872]: E0127 07:05:48.928136 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="pull" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.928143 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="pull" Jan 27 07:05:48 crc kubenswrapper[4872]: E0127 07:05:48.928156 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="util" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.928164 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="util" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.928271 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="59de723c-44ad-4f00-b0d8-e04fee093418" containerName="extract" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.928640 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.934542 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dtpm6" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.934602 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.935058 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.936253 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8c2g\" (UniqueName: \"kubernetes.io/projected/e1b87711-10f9-4c88-966a-6229cf79f03a-kube-api-access-j8c2g\") pod \"nmstate-operator-646758c888-j8rtk\" (UID: \"e1b87711-10f9-4c88-966a-6229cf79f03a\") " pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" Jan 27 07:05:48 crc kubenswrapper[4872]: I0127 07:05:48.940392 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j8rtk"] Jan 27 07:05:49 crc kubenswrapper[4872]: I0127 07:05:49.037558 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8c2g\" (UniqueName: \"kubernetes.io/projected/e1b87711-10f9-4c88-966a-6229cf79f03a-kube-api-access-j8c2g\") pod \"nmstate-operator-646758c888-j8rtk\" (UID: \"e1b87711-10f9-4c88-966a-6229cf79f03a\") " pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" Jan 27 07:05:49 crc kubenswrapper[4872]: I0127 07:05:49.057712 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8c2g\" (UniqueName: \"kubernetes.io/projected/e1b87711-10f9-4c88-966a-6229cf79f03a-kube-api-access-j8c2g\") pod \"nmstate-operator-646758c888-j8rtk\" (UID: \"e1b87711-10f9-4c88-966a-6229cf79f03a\") " pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" Jan 27 07:05:49 crc kubenswrapper[4872]: I0127 07:05:49.301149 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" Jan 27 07:05:49 crc kubenswrapper[4872]: I0127 07:05:49.700217 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-j8rtk"] Jan 27 07:05:50 crc kubenswrapper[4872]: I0127 07:05:50.606026 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" event={"ID":"e1b87711-10f9-4c88-966a-6229cf79f03a","Type":"ContainerStarted","Data":"eb3b94a179dc65e9b8851721bc6945ef7fa026d23ec710784b361cdb962b098f"} Jan 27 07:05:55 crc kubenswrapper[4872]: I0127 07:05:55.001628 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:05:55 crc kubenswrapper[4872]: I0127 07:05:55.001973 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:05:56 crc kubenswrapper[4872]: I0127 07:05:56.636392 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" event={"ID":"e1b87711-10f9-4c88-966a-6229cf79f03a","Type":"ContainerStarted","Data":"22ea365779dd22e2fe4b952a534d463b575c714f881312b7606905367fcfec3e"} Jan 27 07:05:56 crc kubenswrapper[4872]: I0127 07:05:56.654510 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-j8rtk" podStartSLOduration=1.942078468 podStartE2EDuration="8.654484467s" podCreationTimestamp="2026-01-27 07:05:48 +0000 UTC" firstStartedPulling="2026-01-27 07:05:49.727623071 +0000 UTC m=+726.255098277" lastFinishedPulling="2026-01-27 07:05:56.44002908 +0000 UTC m=+732.967504276" observedRunningTime="2026-01-27 07:05:56.649450441 +0000 UTC m=+733.176925637" watchObservedRunningTime="2026-01-27 07:05:56.654484467 +0000 UTC m=+733.181959663" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.723951 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lgvbb"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.724974 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.733706 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lgvbb"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.734131 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fvs9d" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.737213 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p98x\" (UniqueName: \"kubernetes.io/projected/630c7ad6-ffad-412c-8e81-c674d0a64558-kube-api-access-2p98x\") pod \"nmstate-metrics-54757c584b-lgvbb\" (UID: \"630c7ad6-ffad-412c-8e81-c674d0a64558\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.762479 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.763331 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.768788 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.804191 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.804435 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wcbvw"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.805106 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.838249 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p98x\" (UniqueName: \"kubernetes.io/projected/630c7ad6-ffad-412c-8e81-c674d0a64558-kube-api-access-2p98x\") pod \"nmstate-metrics-54757c584b-lgvbb\" (UID: \"630c7ad6-ffad-412c-8e81-c674d0a64558\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.857412 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p98x\" (UniqueName: \"kubernetes.io/projected/630c7ad6-ffad-412c-8e81-c674d0a64558-kube-api-access-2p98x\") pod \"nmstate-metrics-54757c584b-lgvbb\" (UID: \"630c7ad6-ffad-412c-8e81-c674d0a64558\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.914464 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.915295 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.917562 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.917744 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.918419 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5qwcn" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.929029 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z"] Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.939685 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-dbus-socket\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.940122 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fknr\" (UniqueName: \"kubernetes.io/projected/a9bd09ec-0fa1-4d66-9df7-86950125ea55-kube-api-access-8fknr\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.940159 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4tn\" (UniqueName: \"kubernetes.io/projected/dc5cf68b-3227-4b9d-aaf9-4562e622e0a0-kube-api-access-2t4tn\") pod \"nmstate-webhook-8474b5b9d8-f4ltq\" (UID: \"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.940186 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-nmstate-lock\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.940237 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-ovs-socket\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:57 crc kubenswrapper[4872]: I0127 07:05:57.940278 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5cf68b-3227-4b9d-aaf9-4562e622e0a0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f4ltq\" (UID: \"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041478 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1258b368-7cb6-4e54-9def-f4e379f44f4d-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041549 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-dbus-socket\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041568 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fknr\" (UniqueName: \"kubernetes.io/projected/a9bd09ec-0fa1-4d66-9df7-86950125ea55-kube-api-access-8fknr\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041584 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1258b368-7cb6-4e54-9def-f4e379f44f4d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041605 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4tn\" (UniqueName: \"kubernetes.io/projected/dc5cf68b-3227-4b9d-aaf9-4562e622e0a0-kube-api-access-2t4tn\") pod \"nmstate-webhook-8474b5b9d8-f4ltq\" (UID: \"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041618 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-nmstate-lock\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041650 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpfj8\" (UniqueName: \"kubernetes.io/projected/1258b368-7cb6-4e54-9def-f4e379f44f4d-kube-api-access-fpfj8\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041671 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-ovs-socket\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.041689 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5cf68b-3227-4b9d-aaf9-4562e622e0a0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f4ltq\" (UID: \"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.042459 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-nmstate-lock\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.042479 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-dbus-socket\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.042528 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a9bd09ec-0fa1-4d66-9df7-86950125ea55-ovs-socket\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.042909 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.045720 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5cf68b-3227-4b9d-aaf9-4562e622e0a0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f4ltq\" (UID: \"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.066170 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fknr\" (UniqueName: \"kubernetes.io/projected/a9bd09ec-0fa1-4d66-9df7-86950125ea55-kube-api-access-8fknr\") pod \"nmstate-handler-wcbvw\" (UID: \"a9bd09ec-0fa1-4d66-9df7-86950125ea55\") " pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.073258 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4tn\" (UniqueName: \"kubernetes.io/projected/dc5cf68b-3227-4b9d-aaf9-4562e622e0a0-kube-api-access-2t4tn\") pod \"nmstate-webhook-8474b5b9d8-f4ltq\" (UID: \"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.078875 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.128305 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.135472 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bbcc9b596-wb45r"] Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.136413 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146030 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-trusted-ca-bundle\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146071 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-serving-cert\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146091 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-oauth-serving-cert\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146125 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1258b368-7cb6-4e54-9def-f4e379f44f4d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146190 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-service-ca\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146206 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-config\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146232 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpfj8\" (UniqueName: \"kubernetes.io/projected/1258b368-7cb6-4e54-9def-f4e379f44f4d-kube-api-access-fpfj8\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146275 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlbq\" (UniqueName: \"kubernetes.io/projected/496cc50d-b7ea-4d73-a580-0922d170bc1e-kube-api-access-6xlbq\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146303 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-oauth-config\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.146324 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1258b368-7cb6-4e54-9def-f4e379f44f4d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: E0127 07:05:58.146436 4872 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 07:05:58 crc kubenswrapper[4872]: E0127 07:05:58.146486 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1258b368-7cb6-4e54-9def-f4e379f44f4d-plugin-serving-cert podName:1258b368-7cb6-4e54-9def-f4e379f44f4d nodeName:}" failed. No retries permitted until 2026-01-27 07:05:58.646469246 +0000 UTC m=+735.173944442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1258b368-7cb6-4e54-9def-f4e379f44f4d-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-q5l6z" (UID: "1258b368-7cb6-4e54-9def-f4e379f44f4d") : secret "plugin-serving-cert" not found Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.148601 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1258b368-7cb6-4e54-9def-f4e379f44f4d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.161662 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcc9b596-wb45r"] Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.181913 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpfj8\" (UniqueName: \"kubernetes.io/projected/1258b368-7cb6-4e54-9def-f4e379f44f4d-kube-api-access-fpfj8\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.246921 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlbq\" (UniqueName: \"kubernetes.io/projected/496cc50d-b7ea-4d73-a580-0922d170bc1e-kube-api-access-6xlbq\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.246962 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-oauth-config\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.247004 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-trusted-ca-bundle\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.247025 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-serving-cert\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.247042 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-oauth-serving-cert\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.247068 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-service-ca\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.247085 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-config\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.248911 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-trusted-ca-bundle\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.251155 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-config\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.252598 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-service-ca\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.259896 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/496cc50d-b7ea-4d73-a580-0922d170bc1e-oauth-serving-cert\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.260052 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-serving-cert\") pod 
\"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.261762 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/496cc50d-b7ea-4d73-a580-0922d170bc1e-console-oauth-config\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.272625 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlbq\" (UniqueName: \"kubernetes.io/projected/496cc50d-b7ea-4d73-a580-0922d170bc1e-kube-api-access-6xlbq\") pod \"console-bbcc9b596-wb45r\" (UID: \"496cc50d-b7ea-4d73-a580-0922d170bc1e\") " pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.422069 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq"] Jan 27 07:05:58 crc kubenswrapper[4872]: W0127 07:05:58.423796 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc5cf68b_3227_4b9d_aaf9_4562e622e0a0.slice/crio-63c87e3586514fd976c71871e988415cf383d68357ee16feefea27d0327493bb WatchSource:0}: Error finding container 63c87e3586514fd976c71871e988415cf383d68357ee16feefea27d0327493bb: Status 404 returned error can't find the container with id 63c87e3586514fd976c71871e988415cf383d68357ee16feefea27d0327493bb Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.463112 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.590303 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lgvbb"] Jan 27 07:05:58 crc kubenswrapper[4872]: W0127 07:05:58.599721 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630c7ad6_ffad_412c_8e81_c674d0a64558.slice/crio-7cc08cc02c94d641eb8fe819409c95a18a5a3fd4d11a0dac4944755f6969d3cd WatchSource:0}: Error finding container 7cc08cc02c94d641eb8fe819409c95a18a5a3fd4d11a0dac4944755f6969d3cd: Status 404 returned error can't find the container with id 7cc08cc02c94d641eb8fe819409c95a18a5a3fd4d11a0dac4944755f6969d3cd Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.646803 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" event={"ID":"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0","Type":"ContainerStarted","Data":"63c87e3586514fd976c71871e988415cf383d68357ee16feefea27d0327493bb"} Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.648574 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" event={"ID":"630c7ad6-ffad-412c-8e81-c674d0a64558","Type":"ContainerStarted","Data":"7cc08cc02c94d641eb8fe819409c95a18a5a3fd4d11a0dac4944755f6969d3cd"} Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.653642 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wcbvw" event={"ID":"a9bd09ec-0fa1-4d66-9df7-86950125ea55","Type":"ContainerStarted","Data":"3205747fb99616a0c0f0433fe5adc8d5156edc5e319eb0c5ee9a874195cada11"} Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 
07:05:58.654506 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1258b368-7cb6-4e54-9def-f4e379f44f4d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.659489 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1258b368-7cb6-4e54-9def-f4e379f44f4d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q5l6z\" (UID: \"1258b368-7cb6-4e54-9def-f4e379f44f4d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.660514 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcc9b596-wb45r"] Jan 27 07:05:58 crc kubenswrapper[4872]: W0127 07:05:58.666462 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod496cc50d_b7ea_4d73_a580_0922d170bc1e.slice/crio-60d35cc434c32d72d563d8e04e05c6451b271d10688a02e28c5bf6d352a5475b WatchSource:0}: Error finding container 60d35cc434c32d72d563d8e04e05c6451b271d10688a02e28c5bf6d352a5475b: Status 404 returned error can't find the container with id 60d35cc434c32d72d563d8e04e05c6451b271d10688a02e28c5bf6d352a5475b Jan 27 07:05:58 crc kubenswrapper[4872]: I0127 07:05:58.839641 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" Jan 27 07:05:59 crc kubenswrapper[4872]: I0127 07:05:59.244195 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z"] Jan 27 07:05:59 crc kubenswrapper[4872]: I0127 07:05:59.660009 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" event={"ID":"1258b368-7cb6-4e54-9def-f4e379f44f4d","Type":"ContainerStarted","Data":"bc8ac352ce062e586f23b6ef3a190117dd045a57d65258b5c1f01040084dc6b9"} Jan 27 07:05:59 crc kubenswrapper[4872]: I0127 07:05:59.662347 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcc9b596-wb45r" event={"ID":"496cc50d-b7ea-4d73-a580-0922d170bc1e","Type":"ContainerStarted","Data":"57d1f7dfdadae5873af4041045fc1329a1dd5b769b444c34ea9090a34ca32e17"} Jan 27 07:05:59 crc kubenswrapper[4872]: I0127 07:05:59.662374 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcc9b596-wb45r" event={"ID":"496cc50d-b7ea-4d73-a580-0922d170bc1e","Type":"ContainerStarted","Data":"60d35cc434c32d72d563d8e04e05c6451b271d10688a02e28c5bf6d352a5475b"} Jan 27 07:05:59 crc kubenswrapper[4872]: I0127 07:05:59.684197 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bbcc9b596-wb45r" podStartSLOduration=1.684179446 podStartE2EDuration="1.684179446s" podCreationTimestamp="2026-01-27 07:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:05:59.680723123 +0000 UTC m=+736.208198319" watchObservedRunningTime="2026-01-27 07:05:59.684179446 +0000 UTC m=+736.211654642" Jan 27 07:06:01 crc kubenswrapper[4872]: I0127 07:06:01.697892 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" event={"ID":"dc5cf68b-3227-4b9d-aaf9-4562e622e0a0","Type":"ContainerStarted","Data":"ebdd487f3192eb45a388784ec5f6b91965cee95d0ec656a7c3bec15992dee632"} Jan 27 07:06:01 crc kubenswrapper[4872]: I0127 07:06:01.707090 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" event={"ID":"630c7ad6-ffad-412c-8e81-c674d0a64558","Type":"ContainerStarted","Data":"aa3698d1bc21c98ac6353599c838a62b10e47f6bc5a36b9a4f2294d4abc93c44"} Jan 27 07:06:01 crc kubenswrapper[4872]: I0127 07:06:01.708507 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wcbvw" event={"ID":"a9bd09ec-0fa1-4d66-9df7-86950125ea55","Type":"ContainerStarted","Data":"14e6b3d147b6881d995585447758003ff2da1e2fac999e7356dfa79aacc92979"} Jan 27 07:06:01 crc kubenswrapper[4872]: I0127 07:06:01.709159 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:06:01 crc kubenswrapper[4872]: I0127 07:06:01.724947 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" podStartSLOduration=2.235089015 podStartE2EDuration="4.724928321s" podCreationTimestamp="2026-01-27 07:05:57 +0000 UTC" firstStartedPulling="2026-01-27 07:05:58.42533569 +0000 UTC m=+734.952810886" lastFinishedPulling="2026-01-27 07:06:00.915174996 +0000 UTC m=+737.442650192" observedRunningTime="2026-01-27 07:06:01.71754818 +0000 UTC m=+738.245023376" watchObservedRunningTime="2026-01-27 07:06:01.724928321 +0000 UTC m=+738.252403517" Jan 27 07:06:02 crc kubenswrapper[4872]: I0127 07:06:02.714517 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" event={"ID":"1258b368-7cb6-4e54-9def-f4e379f44f4d","Type":"ContainerStarted","Data":"2a4b342d4a011d5c906b6890bdd62593e3ccd421af07a8058e5e22e7591add7b"} Jan 27 07:06:02 crc kubenswrapper[4872]: I0127 07:06:02.714874 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:06:02 crc kubenswrapper[4872]: I0127 07:06:02.734427 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q5l6z" podStartSLOduration=2.626217004 podStartE2EDuration="5.734404582s" podCreationTimestamp="2026-01-27 07:05:57 +0000 UTC" firstStartedPulling="2026-01-27 07:05:59.25594442 +0000 UTC m=+735.783419636" lastFinishedPulling="2026-01-27 07:06:02.364132028 +0000 UTC m=+738.891607214" observedRunningTime="2026-01-27 07:06:02.726455916 +0000 UTC m=+739.253931132" watchObservedRunningTime="2026-01-27 07:06:02.734404582 +0000 UTC m=+739.261879768" Jan 27 07:06:02 crc kubenswrapper[4872]: I0127 07:06:02.736705 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wcbvw" podStartSLOduration=3.051004105 podStartE2EDuration="5.736681273s" podCreationTimestamp="2026-01-27 07:05:57 +0000 UTC" firstStartedPulling="2026-01-27 07:05:58.199227077 +0000 UTC m=+734.726702273" lastFinishedPulling="2026-01-27 07:06:00.884904245 +0000 UTC m=+737.412379441" observedRunningTime="2026-01-27 07:06:01.735886817 +0000 UTC m=+738.263362033" watchObservedRunningTime="2026-01-27 07:06:02.736681273 +0000 UTC m=+739.264156469" Jan 27 07:06:03 crc kubenswrapper[4872]: I0127 07:06:03.722174 4872 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" event={"ID":"630c7ad6-ffad-412c-8e81-c674d0a64558","Type":"ContainerStarted","Data":"51ea0879e9df4b3232ebceff9617d397272ff574ce8244b21b90682b4b673ce5"} Jan 27 07:06:03 crc kubenswrapper[4872]: I0127 07:06:03.739174 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-lgvbb" podStartSLOduration=1.957485205 podStartE2EDuration="6.739158755s" podCreationTimestamp="2026-01-27 07:05:57 +0000 UTC" firstStartedPulling="2026-01-27 07:05:58.602899957 +0000 UTC m=+735.130375143" lastFinishedPulling="2026-01-27 07:06:03.384573497 +0000 UTC m=+739.912048693" observedRunningTime="2026-01-27 07:06:03.736300868 +0000 UTC m=+740.263776074" watchObservedRunningTime="2026-01-27 07:06:03.739158755 +0000 UTC m=+740.266633951" Jan 27 07:06:08 crc kubenswrapper[4872]: I0127 07:06:08.151928 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wcbvw" Jan 27 07:06:08 crc kubenswrapper[4872]: I0127 07:06:08.464839 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:06:08 crc kubenswrapper[4872]: I0127 07:06:08.464939 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:06:08 crc kubenswrapper[4872]: I0127 07:06:08.470346 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:06:08 crc kubenswrapper[4872]: I0127 07:06:08.753207 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bbcc9b596-wb45r" Jan 27 07:06:08 crc kubenswrapper[4872]: I0127 07:06:08.815724 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ntnst"] Jan 27 07:06:17 crc kubenswrapper[4872]: I0127 07:06:17.878636 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zltdc"] Jan 27 07:06:17 crc kubenswrapper[4872]: I0127 07:06:17.884234 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:17 crc kubenswrapper[4872]: I0127 07:06:17.890296 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zltdc"] Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.023198 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-utilities\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.023256 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697rk\" (UniqueName: \"kubernetes.io/projected/586cf851-70f7-443c-b234-063bd1de3b1c-kube-api-access-697rk\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.023292 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-catalog-content\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.087962 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f4ltq" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.124389 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-utilities\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.124728 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-697rk\" (UniqueName: \"kubernetes.io/projected/586cf851-70f7-443c-b234-063bd1de3b1c-kube-api-access-697rk\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.124770 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-catalog-content\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.125110 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-utilities\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.125160 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-catalog-content\") pod \"community-operators-zltdc\" (UID: 
\"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.151743 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-697rk\" (UniqueName: \"kubernetes.io/projected/586cf851-70f7-443c-b234-063bd1de3b1c-kube-api-access-697rk\") pod \"community-operators-zltdc\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.204947 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.547317 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zltdc"] Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.808648 4872 generic.go:334] "Generic (PLEG): container finished" podID="586cf851-70f7-443c-b234-063bd1de3b1c" containerID="d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7" exitCode=0 Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.808680 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zltdc" event={"ID":"586cf851-70f7-443c-b234-063bd1de3b1c","Type":"ContainerDied","Data":"d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7"} Jan 27 07:06:18 crc kubenswrapper[4872]: I0127 07:06:18.808705 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zltdc" event={"ID":"586cf851-70f7-443c-b234-063bd1de3b1c","Type":"ContainerStarted","Data":"757fac151955bd7ef6a0ca4ceea2df4f0feaed9abcdb9b860fc3ad046e032964"} Jan 27 07:06:19 crc kubenswrapper[4872]: I0127 07:06:19.815405 4872 generic.go:334] "Generic (PLEG): container finished" podID="586cf851-70f7-443c-b234-063bd1de3b1c" containerID="202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e" exitCode=0 Jan 27 07:06:19 crc kubenswrapper[4872]: I0127 07:06:19.815559 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zltdc" event={"ID":"586cf851-70f7-443c-b234-063bd1de3b1c","Type":"ContainerDied","Data":"202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e"} Jan 27 07:06:20 crc kubenswrapper[4872]: I0127 07:06:20.824304 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zltdc" event={"ID":"586cf851-70f7-443c-b234-063bd1de3b1c","Type":"ContainerStarted","Data":"e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c"} Jan 27 07:06:20 crc kubenswrapper[4872]: I0127 07:06:20.842253 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zltdc" podStartSLOduration=2.476668135 podStartE2EDuration="3.842237016s" podCreationTimestamp="2026-01-27 07:06:17 +0000 UTC" firstStartedPulling="2026-01-27 07:06:18.810561098 +0000 UTC m=+755.338036304" lastFinishedPulling="2026-01-27 07:06:20.176129989 +0000 UTC m=+756.703605185" observedRunningTime="2026-01-27 07:06:20.841448245 +0000 UTC m=+757.368923491" watchObservedRunningTime="2026-01-27 07:06:20.842237016 +0000 UTC m=+757.369712212" Jan 27 07:06:21 crc kubenswrapper[4872]: I0127 07:06:21.043709 4872 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.001171 
4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.001645 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.001685 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.002266 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"462f190318026a23f51e6663c8fa4063ee29388a381bdeb37568ca3834d2b1f2"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.002309 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://462f190318026a23f51e6663c8fa4063ee29388a381bdeb37568ca3834d2b1f2" gracePeriod=600 Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.709426 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g77cj"] Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.710862 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.725528 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g77cj"] Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.748973 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dlzd\" (UniqueName: \"kubernetes.io/projected/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-kube-api-access-5dlzd\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.749065 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-catalog-content\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.749116 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-utilities\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.850030 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-utilities\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.850084 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dlzd\" (UniqueName: \"kubernetes.io/projected/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-kube-api-access-5dlzd\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.850136 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-catalog-content\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.851084 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-catalog-content\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.851151 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-utilities\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.853937 4872 generic.go:334] "Generic (PLEG): container finished" 
podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="462f190318026a23f51e6663c8fa4063ee29388a381bdeb37568ca3834d2b1f2" exitCode=0 Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.853981 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"462f190318026a23f51e6663c8fa4063ee29388a381bdeb37568ca3834d2b1f2"} Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.854006 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"20143d97a66f83fcd247e09e3531e769d9734a19dc520edc93aa6f0111b588b3"} Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.854024 4872 scope.go:117] "RemoveContainer" containerID="0caf69cb9e39f3837852069683cde633e818b7d540e32a37fd7b20bbc8a7756d" Jan 27 07:06:25 crc kubenswrapper[4872]: I0127 07:06:25.875760 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dlzd\" (UniqueName: \"kubernetes.io/projected/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-kube-api-access-5dlzd\") pod \"redhat-operators-g77cj\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:26 crc kubenswrapper[4872]: I0127 07:06:26.034116 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:26 crc kubenswrapper[4872]: I0127 07:06:26.347854 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g77cj"] Jan 27 07:06:26 crc kubenswrapper[4872]: W0127 07:06:26.354374 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a26b0d_dfd4_44f9_ada4_2335c75b57b5.slice/crio-b5f2e634566a5457e17f45b618ee8cef394a1de18ab5ebab94ec4f6168fc0754 WatchSource:0}: Error finding container b5f2e634566a5457e17f45b618ee8cef394a1de18ab5ebab94ec4f6168fc0754: Status 404 returned error can't find the container with id b5f2e634566a5457e17f45b618ee8cef394a1de18ab5ebab94ec4f6168fc0754 Jan 27 07:06:26 crc kubenswrapper[4872]: I0127 07:06:26.864928 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g77cj" event={"ID":"74a26b0d-dfd4-44f9-ada4-2335c75b57b5","Type":"ContainerDied","Data":"c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749"} Jan 27 07:06:26 crc kubenswrapper[4872]: I0127 07:06:26.866507 4872 generic.go:334] "Generic (PLEG): container finished" podID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerID="c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749" exitCode=0 Jan 27 07:06:26 crc kubenswrapper[4872]: I0127 07:06:26.868945 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g77cj" event={"ID":"74a26b0d-dfd4-44f9-ada4-2335c75b57b5","Type":"ContainerStarted","Data":"b5f2e634566a5457e17f45b618ee8cef394a1de18ab5ebab94ec4f6168fc0754"} Jan 27 07:06:28 crc kubenswrapper[4872]: I0127 07:06:28.205608 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:28 crc kubenswrapper[4872]: I0127 07:06:28.205916 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zltdc" 
Jan 27 07:06:28 crc kubenswrapper[4872]: I0127 07:06:28.247676 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:28 crc kubenswrapper[4872]: I0127 07:06:28.922301 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:29 crc kubenswrapper[4872]: I0127 07:06:29.888124 4872 generic.go:334] "Generic (PLEG): container finished" podID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerID="56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4" exitCode=0 Jan 27 07:06:29 crc kubenswrapper[4872]: I0127 07:06:29.888176 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g77cj" event={"ID":"74a26b0d-dfd4-44f9-ada4-2335c75b57b5","Type":"ContainerDied","Data":"56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4"} Jan 27 07:06:30 crc kubenswrapper[4872]: I0127 07:06:30.083582 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zltdc"] Jan 27 07:06:30 crc kubenswrapper[4872]: I0127 07:06:30.896463 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g77cj" event={"ID":"74a26b0d-dfd4-44f9-ada4-2335c75b57b5","Type":"ContainerStarted","Data":"b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1"} Jan 27 07:06:30 crc kubenswrapper[4872]: I0127 07:06:30.896616 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zltdc" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="registry-server" containerID="cri-o://e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c" gracePeriod=2 Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.719619 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.733890 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g77cj" podStartSLOduration=3.322817983 podStartE2EDuration="6.733872186s" podCreationTimestamp="2026-01-27 07:06:25 +0000 UTC" firstStartedPulling="2026-01-27 07:06:26.873529522 +0000 UTC m=+763.401004708" lastFinishedPulling="2026-01-27 07:06:30.284583715 +0000 UTC m=+766.812058911" observedRunningTime="2026-01-27 07:06:30.918291304 +0000 UTC m=+767.445766510" watchObservedRunningTime="2026-01-27 07:06:31.733872186 +0000 UTC m=+768.261347392" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.756504 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-697rk\" (UniqueName: \"kubernetes.io/projected/586cf851-70f7-443c-b234-063bd1de3b1c-kube-api-access-697rk\") pod \"586cf851-70f7-443c-b234-063bd1de3b1c\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.756642 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-catalog-content\") pod \"586cf851-70f7-443c-b234-063bd1de3b1c\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.756728 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-utilities\") pod \"586cf851-70f7-443c-b234-063bd1de3b1c\" (UID: \"586cf851-70f7-443c-b234-063bd1de3b1c\") " Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.757561 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-utilities" (OuterVolumeSpecName: "utilities") pod "586cf851-70f7-443c-b234-063bd1de3b1c" (UID: "586cf851-70f7-443c-b234-063bd1de3b1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.766126 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586cf851-70f7-443c-b234-063bd1de3b1c-kube-api-access-697rk" (OuterVolumeSpecName: "kube-api-access-697rk") pod "586cf851-70f7-443c-b234-063bd1de3b1c" (UID: "586cf851-70f7-443c-b234-063bd1de3b1c"). InnerVolumeSpecName "kube-api-access-697rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.803502 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "586cf851-70f7-443c-b234-063bd1de3b1c" (UID: "586cf851-70f7-443c-b234-063bd1de3b1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.858361 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.858435 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586cf851-70f7-443c-b234-063bd1de3b1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.858448 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-697rk\" (UniqueName: \"kubernetes.io/projected/586cf851-70f7-443c-b234-063bd1de3b1c-kube-api-access-697rk\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.903440 4872 generic.go:334] "Generic (PLEG): container finished" podID="586cf851-70f7-443c-b234-063bd1de3b1c" containerID="e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c" exitCode=0 Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.903507 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zltdc" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.903526 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zltdc" event={"ID":"586cf851-70f7-443c-b234-063bd1de3b1c","Type":"ContainerDied","Data":"e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c"} Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.903574 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zltdc" event={"ID":"586cf851-70f7-443c-b234-063bd1de3b1c","Type":"ContainerDied","Data":"757fac151955bd7ef6a0ca4ceea2df4f0feaed9abcdb9b860fc3ad046e032964"} Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.903594 4872 scope.go:117] "RemoveContainer" containerID="e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.934064 4872 scope.go:117] "RemoveContainer" containerID="202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.937712 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9"] Jan 27 07:06:31 crc kubenswrapper[4872]: E0127 07:06:31.938014 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="registry-server" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.938029 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="registry-server" Jan 27 07:06:31 crc kubenswrapper[4872]: E0127 07:06:31.938042 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="extract-content" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.938050 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="extract-content" Jan 27 07:06:31 crc kubenswrapper[4872]: E0127 07:06:31.938061 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="extract-utilities" Jan 27 07:06:31 crc 
kubenswrapper[4872]: I0127 07:06:31.938069 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="extract-utilities" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.938185 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" containerName="registry-server" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.939155 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.940702 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.953241 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zltdc"] Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.955471 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9"] Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.960031 4872 scope.go:117] "RemoveContainer" containerID="d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.962063 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zltdc"] Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.980282 4872 scope.go:117] "RemoveContainer" containerID="e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c" Jan 27 07:06:31 crc kubenswrapper[4872]: E0127 07:06:31.980712 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c\": container with ID starting with e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c not found: ID does not exist" containerID="e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.980751 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c"} err="failed to get container status \"e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c\": rpc error: code = NotFound desc = could not find container \"e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c\": container with ID starting with e1db08a4f52bb4e2e43e54dcc3f4b070b4a6a5713e95a25b40df1820fb5bc83c not found: ID does not exist" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.980780 4872 scope.go:117] "RemoveContainer" containerID="202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e" Jan 27 07:06:31 crc kubenswrapper[4872]: E0127 07:06:31.981337 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e\": container with ID starting with 202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e not found: ID does not exist" containerID="202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.981379 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e"} err="failed to get container status \"202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e\": rpc error: code = NotFound desc = could not find container \"202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e\": container with ID starting with 202c48b03fe4b1b93a2351f548724735b989b0ba06a83f80fe7fe3359bafaf7e not found: ID does not exist" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.981406 4872 scope.go:117] "RemoveContainer" containerID="d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7" Jan 27 07:06:31 crc kubenswrapper[4872]: E0127 07:06:31.981690 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7\": container with ID starting with d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7 not found: ID does not exist" containerID="d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7" Jan 27 07:06:31 crc kubenswrapper[4872]: I0127 07:06:31.981718 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7"} err="failed to get container status \"d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7\": rpc error: code = NotFound desc = could not find container \"d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7\": container with ID starting with d38728c23c803464fb78fe9ab91330067ad91d8103bac5d71057d03adeb4d3a7 not found: ID does not exist" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.060544 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxhv\" (UniqueName: \"kubernetes.io/projected/d7c87876-5377-48d5-9648-b761334f75c7-kube-api-access-4xxhv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.060632 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.060663 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.105115 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586cf851-70f7-443c-b234-063bd1de3b1c" path="/var/lib/kubelet/pods/586cf851-70f7-443c-b234-063bd1de3b1c/volumes" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.162126 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4xxhv\" (UniqueName: \"kubernetes.io/projected/d7c87876-5377-48d5-9648-b761334f75c7-kube-api-access-4xxhv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.162224 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.162273 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.163072 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.163352 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.182087 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxhv\" (UniqueName: \"kubernetes.io/projected/d7c87876-5377-48d5-9648-b761334f75c7-kube-api-access-4xxhv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.277097 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.486869 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9"] Jan 27 07:06:32 crc kubenswrapper[4872]: W0127 07:06:32.489891 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c87876_5377_48d5_9648_b761334f75c7.slice/crio-3df552eef53b89321b69b2ac341a5d5b078ac6bac8516e4f69d6017fff74d0bb WatchSource:0}: Error finding container 3df552eef53b89321b69b2ac341a5d5b078ac6bac8516e4f69d6017fff74d0bb: Status 404 returned error can't find the container with id 3df552eef53b89321b69b2ac341a5d5b078ac6bac8516e4f69d6017fff74d0bb Jan 27 07:06:32 crc kubenswrapper[4872]: E0127 07:06:32.739546 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c87876_5377_48d5_9648_b761334f75c7.slice/crio-6e9605b0c63f70c0d64bd77e013241864c1390c849491c14e64c15c23b9ea10c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c87876_5377_48d5_9648_b761334f75c7.slice/crio-conmon-6e9605b0c63f70c0d64bd77e013241864c1390c849491c14e64c15c23b9ea10c.scope\": RecentStats: unable to find data in memory cache]" Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.921331 4872 generic.go:334] "Generic (PLEG): container finished" podID="d7c87876-5377-48d5-9648-b761334f75c7" containerID="6e9605b0c63f70c0d64bd77e013241864c1390c849491c14e64c15c23b9ea10c" exitCode=0 Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.921410 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" event={"ID":"d7c87876-5377-48d5-9648-b761334f75c7","Type":"ContainerDied","Data":"6e9605b0c63f70c0d64bd77e013241864c1390c849491c14e64c15c23b9ea10c"} Jan 27 07:06:32 crc kubenswrapper[4872]: I0127 07:06:32.921437 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" event={"ID":"d7c87876-5377-48d5-9648-b761334f75c7","Type":"ContainerStarted","Data":"3df552eef53b89321b69b2ac341a5d5b078ac6bac8516e4f69d6017fff74d0bb"} Jan 27 07:06:33 crc kubenswrapper[4872]: I0127 07:06:33.871768 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ntnst" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" containerName="console" containerID="cri-o://d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393" gracePeriod=15 Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.224042 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ntnst_8cfa7f72-9e39-485f-894a-276893a688e1/console/0.log" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.224111 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.296687 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-oauth-serving-cert\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.296875 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-trusted-ca-bundle\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.297605 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.297681 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.297813 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-console-config\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.298202 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-console-config" (OuterVolumeSpecName: "console-config") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.298328 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-service-ca\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.298416 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8rlb\" (UniqueName: \"kubernetes.io/projected/8cfa7f72-9e39-485f-894a-276893a688e1-kube-api-access-x8rlb\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.298481 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-oauth-config\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.298505 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-serving-cert\") pod \"8cfa7f72-9e39-485f-894a-276893a688e1\" (UID: \"8cfa7f72-9e39-485f-894a-276893a688e1\") " Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.298662 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-service-ca" (OuterVolumeSpecName: "service-ca") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.299334 4872 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.299368 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.299385 4872 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.299398 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cfa7f72-9e39-485f-894a-276893a688e1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.316045 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.317267 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfa7f72-9e39-485f-894a-276893a688e1-kube-api-access-x8rlb" (OuterVolumeSpecName: "kube-api-access-x8rlb") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "kube-api-access-x8rlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.318052 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8cfa7f72-9e39-485f-894a-276893a688e1" (UID: "8cfa7f72-9e39-485f-894a-276893a688e1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.400752 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8rlb\" (UniqueName: \"kubernetes.io/projected/8cfa7f72-9e39-485f-894a-276893a688e1-kube-api-access-x8rlb\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.401088 4872 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.401184 4872 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa7f72-9e39-485f-894a-276893a688e1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.936389 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" event={"ID":"d7c87876-5377-48d5-9648-b761334f75c7","Type":"ContainerStarted","Data":"b5cc1190828a7f0c9b85dd2399a36ae4df1718dfe8444df92c27d853ccf96b2e"} Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.938742 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ntnst_8cfa7f72-9e39-485f-894a-276893a688e1/console/0.log" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.938771 4872 generic.go:334] "Generic (PLEG): container finished" podID="8cfa7f72-9e39-485f-894a-276893a688e1" containerID="d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393" exitCode=2 Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.938791 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ntnst" event={"ID":"8cfa7f72-9e39-485f-894a-276893a688e1","Type":"ContainerDied","Data":"d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393"} Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.938807 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ntnst" event={"ID":"8cfa7f72-9e39-485f-894a-276893a688e1","Type":"ContainerDied","Data":"00275fe108eab5dab172a2c1e7046f5bc2fa279d90bd66cb6d904cc059380e96"} Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.938826 4872 scope.go:117] "RemoveContainer" containerID="d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.938927 4872 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ntnst" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.959989 4872 scope.go:117] "RemoveContainer" containerID="d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393" Jan 27 07:06:34 crc kubenswrapper[4872]: E0127 07:06:34.960606 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393\": container with ID starting with d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393 not found: ID does not exist" containerID="d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.960730 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393"} err="failed to get container status \"d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393\": rpc error: code = NotFound desc = could not find container \"d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393\": container with ID starting with d3238bd791686210394dc3e0677ec97e87d3412ca420fa7cafed9dbb45fca393 not found: ID does not exist" Jan 27 07:06:34 crc kubenswrapper[4872]: I0127 07:06:34.977173 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ntnst"] Jan 27 07:06:35 crc kubenswrapper[4872]: I0127 07:06:35.001915 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ntnst"] Jan 27 07:06:36 crc kubenswrapper[4872]: I0127 07:06:36.034467 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:36 crc kubenswrapper[4872]: I0127 07:06:36.034740 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:36 crc kubenswrapper[4872]: I0127 07:06:36.074265 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:36 crc kubenswrapper[4872]: I0127 07:06:36.105269 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" path="/var/lib/kubelet/pods/8cfa7f72-9e39-485f-894a-276893a688e1/volumes" Jan 27 07:06:36 crc kubenswrapper[4872]: I0127 07:06:36.956808 4872 generic.go:334] "Generic (PLEG): container finished" podID="d7c87876-5377-48d5-9648-b761334f75c7" containerID="b5cc1190828a7f0c9b85dd2399a36ae4df1718dfe8444df92c27d853ccf96b2e" exitCode=0 Jan 27 07:06:36 crc kubenswrapper[4872]: I0127 07:06:36.956929 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" event={"ID":"d7c87876-5377-48d5-9648-b761334f75c7","Type":"ContainerDied","Data":"b5cc1190828a7f0c9b85dd2399a36ae4df1718dfe8444df92c27d853ccf96b2e"} Jan 27 07:06:37 crc kubenswrapper[4872]: I0127 07:06:37.026469 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:37 crc kubenswrapper[4872]: I0127 07:06:37.965781 4872 generic.go:334] "Generic (PLEG): container finished" podID="d7c87876-5377-48d5-9648-b761334f75c7" containerID="5b8623e39362efac6742f43b6cbe3d0345f1421998461f6ba3543466bcf8ab04" exitCode=0 Jan 27 07:06:37 
crc kubenswrapper[4872]: I0127 07:06:37.965821 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" event={"ID":"d7c87876-5377-48d5-9648-b761334f75c7","Type":"ContainerDied","Data":"5b8623e39362efac6742f43b6cbe3d0345f1421998461f6ba3543466bcf8ab04"} Jan 27 07:06:38 crc kubenswrapper[4872]: I0127 07:06:38.486285 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g77cj"] Jan 27 07:06:38 crc kubenswrapper[4872]: I0127 07:06:38.971557 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g77cj" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="registry-server" containerID="cri-o://b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1" gracePeriod=2 Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.198143 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.258602 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xxhv\" (UniqueName: \"kubernetes.io/projected/d7c87876-5377-48d5-9648-b761334f75c7-kube-api-access-4xxhv\") pod \"d7c87876-5377-48d5-9648-b761334f75c7\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.258660 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-bundle\") pod \"d7c87876-5377-48d5-9648-b761334f75c7\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.258793 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-util\") pod \"d7c87876-5377-48d5-9648-b761334f75c7\" (UID: \"d7c87876-5377-48d5-9648-b761334f75c7\") " Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.259775 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-bundle" (OuterVolumeSpecName: "bundle") pod "d7c87876-5377-48d5-9648-b761334f75c7" (UID: "d7c87876-5377-48d5-9648-b761334f75c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.266057 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c87876-5377-48d5-9648-b761334f75c7-kube-api-access-4xxhv" (OuterVolumeSpecName: "kube-api-access-4xxhv") pod "d7c87876-5377-48d5-9648-b761334f75c7" (UID: "d7c87876-5377-48d5-9648-b761334f75c7"). InnerVolumeSpecName "kube-api-access-4xxhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.271944 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-util" (OuterVolumeSpecName: "util") pod "d7c87876-5377-48d5-9648-b761334f75c7" (UID: "d7c87876-5377-48d5-9648-b761334f75c7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.351965 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.361779 4872 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.361829 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xxhv\" (UniqueName: \"kubernetes.io/projected/d7c87876-5377-48d5-9648-b761334f75c7-kube-api-access-4xxhv\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.361867 4872 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7c87876-5377-48d5-9648-b761334f75c7-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.463039 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dlzd\" (UniqueName: \"kubernetes.io/projected/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-kube-api-access-5dlzd\") pod \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.463131 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-catalog-content\") pod \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.463174 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-utilities\") pod \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\" (UID: \"74a26b0d-dfd4-44f9-ada4-2335c75b57b5\") " Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.464241 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-utilities" (OuterVolumeSpecName: "utilities") pod "74a26b0d-dfd4-44f9-ada4-2335c75b57b5" (UID: "74a26b0d-dfd4-44f9-ada4-2335c75b57b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.466125 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-kube-api-access-5dlzd" (OuterVolumeSpecName: "kube-api-access-5dlzd") pod "74a26b0d-dfd4-44f9-ada4-2335c75b57b5" (UID: "74a26b0d-dfd4-44f9-ada4-2335c75b57b5"). InnerVolumeSpecName "kube-api-access-5dlzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.564608 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.564646 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dlzd\" (UniqueName: \"kubernetes.io/projected/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-kube-api-access-5dlzd\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.662348 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a26b0d-dfd4-44f9-ada4-2335c75b57b5" (UID: "74a26b0d-dfd4-44f9-ada4-2335c75b57b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.666594 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26b0d-dfd4-44f9-ada4-2335c75b57b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.978109 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" event={"ID":"d7c87876-5377-48d5-9648-b761334f75c7","Type":"ContainerDied","Data":"3df552eef53b89321b69b2ac341a5d5b078ac6bac8516e4f69d6017fff74d0bb"} Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.978160 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df552eef53b89321b69b2ac341a5d5b078ac6bac8516e4f69d6017fff74d0bb" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.978160 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.979857 4872 generic.go:334] "Generic (PLEG): container finished" podID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerID="b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1" exitCode=0 Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.979898 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g77cj" event={"ID":"74a26b0d-dfd4-44f9-ada4-2335c75b57b5","Type":"ContainerDied","Data":"b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1"} Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.979906 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g77cj" Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.979923 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g77cj" event={"ID":"74a26b0d-dfd4-44f9-ada4-2335c75b57b5","Type":"ContainerDied","Data":"b5f2e634566a5457e17f45b618ee8cef394a1de18ab5ebab94ec4f6168fc0754"} Jan 27 07:06:39 crc kubenswrapper[4872]: I0127 07:06:39.979940 4872 scope.go:117] "RemoveContainer" containerID="b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.005138 4872 scope.go:117] "RemoveContainer" containerID="56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.017010 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g77cj"] Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.020481 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g77cj"] Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.041696 4872 scope.go:117] "RemoveContainer" containerID="c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.055720 4872 scope.go:117] "RemoveContainer" containerID="b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1" Jan 27 07:06:40 crc kubenswrapper[4872]: E0127 07:06:40.056064 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1\": container with ID starting with b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1 not found: ID does not exist" containerID="b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.056114 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1"} err="failed to get container status \"b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1\": rpc error: code = NotFound desc = could not find container \"b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1\": container with ID starting with b4261331a3a71ee03d39afe75a8ee366ede748ade3f1ed0bc906b1f55ebc85f1 not found: ID does not exist" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.056149 4872 scope.go:117] "RemoveContainer" containerID="56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4" Jan 27 07:06:40 crc kubenswrapper[4872]: E0127 07:06:40.056414 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4\": container with ID starting with 56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4 not found: ID does not exist" containerID="56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.056446 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4"} err="failed to get container status \"56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4\": rpc error: code = NotFound desc = could not find container 
\"56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4\": container with ID starting with 56d007baebb010f9a0c0f8b92b4fce98540f6cc0f049b71254781bef78ddfef4 not found: ID does not exist" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.056464 4872 scope.go:117] "RemoveContainer" containerID="c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749" Jan 27 07:06:40 crc kubenswrapper[4872]: E0127 07:06:40.056716 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749\": container with ID starting with c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749 not found: ID does not exist" containerID="c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.056745 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749"} err="failed to get container status \"c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749\": rpc error: code = NotFound desc = could not find container \"c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749\": container with ID starting with c96f952af6d8f76a0844ec9ce68c95037dd453101e35ab8878af9517f0dbc749 not found: ID does not exist" Jan 27 07:06:40 crc kubenswrapper[4872]: I0127 07:06:40.104741 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" path="/var/lib/kubelet/pods/74a26b0d-dfd4-44f9-ada4-2335c75b57b5/volumes" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693199 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9trf7"] Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693788 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="registry-server" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693801 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="registry-server" Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693813 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" containerName="console" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693819 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" containerName="console" Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693827 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="extract-utilities" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693833 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="extract-utilities" Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693846 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="extract" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693865 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="extract" Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693877 4872 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="extract-content" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693882 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="extract-content" Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693891 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="pull" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693896 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="pull" Jan 27 07:06:45 crc kubenswrapper[4872]: E0127 07:06:45.693906 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="util" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.693911 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="util" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.694006 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a26b0d-dfd4-44f9-ada4-2335c75b57b5" containerName="registry-server" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.694016 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c87876-5377-48d5-9648-b761334f75c7" containerName="extract" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.694028 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfa7f72-9e39-485f-894a-276893a688e1" containerName="console" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.694728 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.721546 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9trf7"] Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.837066 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-catalog-content\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.837384 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-utilities\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.837426 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zhd\" (UniqueName: \"kubernetes.io/projected/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-kube-api-access-r5zhd\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.939212 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-utilities\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " 
pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.939269 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zhd\" (UniqueName: \"kubernetes.io/projected/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-kube-api-access-r5zhd\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.939287 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-catalog-content\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.939889 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-catalog-content\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.939903 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-utilities\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:45 crc kubenswrapper[4872]: I0127 07:06:45.956790 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zhd\" (UniqueName: \"kubernetes.io/projected/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-kube-api-access-r5zhd\") pod \"redhat-marketplace-9trf7\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:46 crc kubenswrapper[4872]: I0127 07:06:46.010580 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:46 crc kubenswrapper[4872]: I0127 07:06:46.414448 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9trf7"] Jan 27 07:06:46 crc kubenswrapper[4872]: W0127 07:06:46.420927 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88fc6a15_c3e3_4a5e_8c09_c72004b0eded.slice/crio-648e85305d46f0a5aac32c784f3237a354edb412f3932e44309f59ba53c85f50 WatchSource:0}: Error finding container 648e85305d46f0a5aac32c784f3237a354edb412f3932e44309f59ba53c85f50: Status 404 returned error can't find the container with id 648e85305d46f0a5aac32c784f3237a354edb412f3932e44309f59ba53c85f50 Jan 27 07:06:47 crc kubenswrapper[4872]: I0127 07:06:47.327205 4872 generic.go:334] "Generic (PLEG): container finished" podID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerID="4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1" exitCode=0 Jan 27 07:06:47 crc kubenswrapper[4872]: I0127 07:06:47.327256 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9trf7" event={"ID":"88fc6a15-c3e3-4a5e-8c09-c72004b0eded","Type":"ContainerDied","Data":"4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1"} Jan 27 07:06:47 crc kubenswrapper[4872]: I0127 07:06:47.327283 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9trf7" event={"ID":"88fc6a15-c3e3-4a5e-8c09-c72004b0eded","Type":"ContainerStarted","Data":"648e85305d46f0a5aac32c784f3237a354edb412f3932e44309f59ba53c85f50"} Jan 27 07:06:48 crc kubenswrapper[4872]: I0127 07:06:48.334123 4872 generic.go:334] "Generic (PLEG): container finished" podID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerID="912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096" exitCode=0 Jan 27 07:06:48 crc kubenswrapper[4872]: I0127 07:06:48.334160 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9trf7" event={"ID":"88fc6a15-c3e3-4a5e-8c09-c72004b0eded","Type":"ContainerDied","Data":"912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096"} Jan 27 07:06:49 crc kubenswrapper[4872]: I0127 07:06:49.342653 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9trf7" event={"ID":"88fc6a15-c3e3-4a5e-8c09-c72004b0eded","Type":"ContainerStarted","Data":"c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9"} Jan 27 07:06:49 crc kubenswrapper[4872]: I0127 07:06:49.391371 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9trf7" podStartSLOduration=3.026726121 podStartE2EDuration="4.391351566s" podCreationTimestamp="2026-01-27 07:06:45 +0000 UTC" firstStartedPulling="2026-01-27 07:06:47.328862042 +0000 UTC m=+783.856337238" lastFinishedPulling="2026-01-27 07:06:48.693487487 +0000 UTC m=+785.220962683" observedRunningTime="2026-01-27 07:06:49.390029731 +0000 UTC m=+785.917504937" watchObservedRunningTime="2026-01-27 07:06:49.391351566 +0000 UTC m=+785.918826762" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.916307 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt"] Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.917025 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.921455 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.921598 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.921717 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.921757 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.922057 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-64gq6" Jan 27 07:06:50 crc kubenswrapper[4872]: I0127 07:06:50.940718 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt"] Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.100736 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8f8w\" (UniqueName: \"kubernetes.io/projected/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-kube-api-access-t8f8w\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.100823 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-webhook-cert\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.100867 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-apiservice-cert\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.201569 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-webhook-cert\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.201843 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-apiservice-cert\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.201895 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t8f8w\" (UniqueName: \"kubernetes.io/projected/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-kube-api-access-t8f8w\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.207898 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-apiservice-cert\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.208644 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-webhook-cert\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.219274 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8f8w\" (UniqueName: \"kubernetes.io/projected/92a2f2b1-53a8-414f-9530-b7d01f84a6a9-kube-api-access-t8f8w\") pod \"metallb-operator-controller-manager-db7b997f7-nfdqt\" (UID: \"92a2f2b1-53a8-414f-9530-b7d01f84a6a9\") " pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.230889 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.252265 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b"] Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.252938 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.255282 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.255490 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.256299 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qvv29" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.269370 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b"] Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.403988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c59ffc3-ca7a-41d4-885e-dfb0af134941-webhook-cert\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.404086 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c59ffc3-ca7a-41d4-885e-dfb0af134941-apiservice-cert\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.404195 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ccm\" (UniqueName: \"kubernetes.io/projected/4c59ffc3-ca7a-41d4-885e-dfb0af134941-kube-api-access-56ccm\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.505050 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c59ffc3-ca7a-41d4-885e-dfb0af134941-apiservice-cert\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.505113 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ccm\" (UniqueName: \"kubernetes.io/projected/4c59ffc3-ca7a-41d4-885e-dfb0af134941-kube-api-access-56ccm\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.505182 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c59ffc3-ca7a-41d4-885e-dfb0af134941-webhook-cert\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 
07:06:51.516111 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c59ffc3-ca7a-41d4-885e-dfb0af134941-apiservice-cert\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.518417 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c59ffc3-ca7a-41d4-885e-dfb0af134941-webhook-cert\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.547514 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ccm\" (UniqueName: \"kubernetes.io/projected/4c59ffc3-ca7a-41d4-885e-dfb0af134941-kube-api-access-56ccm\") pod \"metallb-operator-webhook-server-78cd4d8b5b-t5s7b\" (UID: \"4c59ffc3-ca7a-41d4-885e-dfb0af134941\") " pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.663339 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.828383 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt"] Jan 27 07:06:51 crc kubenswrapper[4872]: W0127 07:06:51.842176 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a2f2b1_53a8_414f_9530_b7d01f84a6a9.slice/crio-84d4a0ccf9313389a14c8241acfd70d46bd12071fdfb88c4d903c3e5ec21d829 WatchSource:0}: Error finding container 84d4a0ccf9313389a14c8241acfd70d46bd12071fdfb88c4d903c3e5ec21d829: Status 404 returned error can't find the container with id 84d4a0ccf9313389a14c8241acfd70d46bd12071fdfb88c4d903c3e5ec21d829 Jan 27 07:06:51 crc kubenswrapper[4872]: I0127 07:06:51.957642 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b"] Jan 27 07:06:51 crc kubenswrapper[4872]: W0127 07:06:51.961346 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c59ffc3_ca7a_41d4_885e_dfb0af134941.slice/crio-85ab6ceb703fcf5daf8bfa2890cb44c2566b84770378243121941cf4e8318d69 WatchSource:0}: Error finding container 85ab6ceb703fcf5daf8bfa2890cb44c2566b84770378243121941cf4e8318d69: Status 404 returned error can't find the container with id 85ab6ceb703fcf5daf8bfa2890cb44c2566b84770378243121941cf4e8318d69 Jan 27 07:06:52 crc kubenswrapper[4872]: I0127 07:06:52.366119 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" event={"ID":"4c59ffc3-ca7a-41d4-885e-dfb0af134941","Type":"ContainerStarted","Data":"85ab6ceb703fcf5daf8bfa2890cb44c2566b84770378243121941cf4e8318d69"} Jan 27 07:06:52 crc kubenswrapper[4872]: I0127 07:06:52.367124 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" 
event={"ID":"92a2f2b1-53a8-414f-9530-b7d01f84a6a9","Type":"ContainerStarted","Data":"84d4a0ccf9313389a14c8241acfd70d46bd12071fdfb88c4d903c3e5ec21d829"} Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.011825 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.013235 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.096660 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.399815 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" event={"ID":"92a2f2b1-53a8-414f-9530-b7d01f84a6a9","Type":"ContainerStarted","Data":"4e62ed9914eda812e095b7a51eeea9482416cdcc6630164f95832388891b0f88"} Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.399920 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.440214 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" podStartSLOduration=2.919571009 podStartE2EDuration="6.440195663s" podCreationTimestamp="2026-01-27 07:06:50 +0000 UTC" firstStartedPulling="2026-01-27 07:06:51.847509689 +0000 UTC m=+788.374984875" lastFinishedPulling="2026-01-27 07:06:55.368134333 +0000 UTC m=+791.895609529" observedRunningTime="2026-01-27 07:06:56.428527117 +0000 UTC m=+792.956002423" watchObservedRunningTime="2026-01-27 07:06:56.440195663 +0000 UTC m=+792.967670859" Jan 27 07:06:56 crc kubenswrapper[4872]: I0127 07:06:56.441295 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:57 crc kubenswrapper[4872]: I0127 07:06:57.285498 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9trf7"] Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.410110 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" event={"ID":"4c59ffc3-ca7a-41d4-885e-dfb0af134941","Type":"ContainerStarted","Data":"cc6b4c3e30a56a02d09a4a50dbe48d3adf10037611abb8e38a9ad0f32efd7c9d"} Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.410554 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9trf7" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="registry-server" containerID="cri-o://c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9" gracePeriod=2 Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.436798 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" podStartSLOduration=2.050183989 podStartE2EDuration="7.436783639s" podCreationTimestamp="2026-01-27 07:06:51 +0000 UTC" firstStartedPulling="2026-01-27 07:06:51.96483556 +0000 UTC m=+788.492310756" lastFinishedPulling="2026-01-27 07:06:57.35143521 +0000 UTC m=+793.878910406" observedRunningTime="2026-01-27 07:06:58.435135105 +0000 UTC m=+794.962610301" 
watchObservedRunningTime="2026-01-27 07:06:58.436783639 +0000 UTC m=+794.964258835" Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.755731 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.915745 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-catalog-content\") pod \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.918102 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-utilities\") pod \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.918250 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zhd\" (UniqueName: \"kubernetes.io/projected/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-kube-api-access-r5zhd\") pod \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\" (UID: \"88fc6a15-c3e3-4a5e-8c09-c72004b0eded\") " Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.918967 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-utilities" (OuterVolumeSpecName: "utilities") pod "88fc6a15-c3e3-4a5e-8c09-c72004b0eded" (UID: "88fc6a15-c3e3-4a5e-8c09-c72004b0eded"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.925987 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-kube-api-access-r5zhd" (OuterVolumeSpecName: "kube-api-access-r5zhd") pod "88fc6a15-c3e3-4a5e-8c09-c72004b0eded" (UID: "88fc6a15-c3e3-4a5e-8c09-c72004b0eded"). InnerVolumeSpecName "kube-api-access-r5zhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:06:58 crc kubenswrapper[4872]: I0127 07:06:58.937595 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88fc6a15-c3e3-4a5e-8c09-c72004b0eded" (UID: "88fc6a15-c3e3-4a5e-8c09-c72004b0eded"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.020060 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zhd\" (UniqueName: \"kubernetes.io/projected/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-kube-api-access-r5zhd\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.020091 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.020101 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fc6a15-c3e3-4a5e-8c09-c72004b0eded-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.417817 4872 generic.go:334] "Generic (PLEG): container finished" podID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerID="c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9" exitCode=0 Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.417884 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9trf7" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.417909 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9trf7" event={"ID":"88fc6a15-c3e3-4a5e-8c09-c72004b0eded","Type":"ContainerDied","Data":"c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9"} Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.417964 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9trf7" event={"ID":"88fc6a15-c3e3-4a5e-8c09-c72004b0eded","Type":"ContainerDied","Data":"648e85305d46f0a5aac32c784f3237a354edb412f3932e44309f59ba53c85f50"} Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.417988 4872 scope.go:117] "RemoveContainer" containerID="c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.418293 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.448285 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9trf7"] Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.451518 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9trf7"] Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.740861 4872 scope.go:117] "RemoveContainer" containerID="912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.773082 4872 scope.go:117] "RemoveContainer" containerID="4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.789547 4872 scope.go:117] "RemoveContainer" containerID="c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9" Jan 27 07:06:59 crc kubenswrapper[4872]: E0127 07:06:59.790027 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9\": container with ID starting with 
c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9 not found: ID does not exist" containerID="c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.790077 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9"} err="failed to get container status \"c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9\": rpc error: code = NotFound desc = could not find container \"c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9\": container with ID starting with c334b7747dbb93807f9b21a9c3729506bf856404e0b048d40d9e225e9a8910d9 not found: ID does not exist" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.790115 4872 scope.go:117] "RemoveContainer" containerID="912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096" Jan 27 07:06:59 crc kubenswrapper[4872]: E0127 07:06:59.790530 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096\": container with ID starting with 912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096 not found: ID does not exist" containerID="912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.790581 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096"} err="failed to get container status \"912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096\": rpc error: code = NotFound desc = could not find container \"912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096\": container with ID starting with 912398745a0f6428d5d3c95438a84a5f16e750eb9c514501a9c61027876b8096 not found: ID does not exist" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.790614 4872 scope.go:117] "RemoveContainer" containerID="4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1" Jan 27 07:06:59 crc kubenswrapper[4872]: E0127 07:06:59.790944 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1\": container with ID starting with 4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1 not found: ID does not exist" containerID="4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1" Jan 27 07:06:59 crc kubenswrapper[4872]: I0127 07:06:59.790968 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1"} err="failed to get container status \"4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1\": rpc error: code = NotFound desc = could not find container \"4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1\": container with ID starting with 4a397b5645ce12f6522b3c0164ab3cc8e79f0efa4a14b162a8fb7c99dc71dee1 not found: ID does not exist" Jan 27 07:07:00 crc kubenswrapper[4872]: I0127 07:07:00.106151 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" path="/var/lib/kubelet/pods/88fc6a15-c3e3-4a5e-8c09-c72004b0eded/volumes" Jan 27 07:07:11 crc kubenswrapper[4872]: I0127 07:07:11.668207 
4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-78cd4d8b5b-t5s7b" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.158825 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46gcn"] Jan 27 07:07:17 crc kubenswrapper[4872]: E0127 07:07:17.159380 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="extract-utilities" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.159394 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="extract-utilities" Jan 27 07:07:17 crc kubenswrapper[4872]: E0127 07:07:17.159406 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="extract-content" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.159413 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="extract-content" Jan 27 07:07:17 crc kubenswrapper[4872]: E0127 07:07:17.159426 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="registry-server" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.159434 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="registry-server" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.159549 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fc6a15-c3e3-4a5e-8c09-c72004b0eded" containerName="registry-server" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.160481 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.171447 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46gcn"] Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.259618 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kf8q\" (UniqueName: \"kubernetes.io/projected/300a7d1b-3bea-4880-b35d-8a882069cf48-kube-api-access-6kf8q\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.260215 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-utilities\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.260269 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-catalog-content\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.361733 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-catalog-content\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.361811 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kf8q\" (UniqueName: \"kubernetes.io/projected/300a7d1b-3bea-4880-b35d-8a882069cf48-kube-api-access-6kf8q\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.361912 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-utilities\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.362291 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-catalog-content\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.362393 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-utilities\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.380249 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6kf8q\" (UniqueName: \"kubernetes.io/projected/300a7d1b-3bea-4880-b35d-8a882069cf48-kube-api-access-6kf8q\") pod \"certified-operators-46gcn\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.474223 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:17 crc kubenswrapper[4872]: I0127 07:07:17.983225 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46gcn"] Jan 27 07:07:17 crc kubenswrapper[4872]: W0127 07:07:17.991530 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod300a7d1b_3bea_4880_b35d_8a882069cf48.slice/crio-452491fb9a73c037d53c9b857c5b1d0945de4625d3339ddba3f582ce18e0afe3 WatchSource:0}: Error finding container 452491fb9a73c037d53c9b857c5b1d0945de4625d3339ddba3f582ce18e0afe3: Status 404 returned error can't find the container with id 452491fb9a73c037d53c9b857c5b1d0945de4625d3339ddba3f582ce18e0afe3 Jan 27 07:07:18 crc kubenswrapper[4872]: I0127 07:07:18.524028 4872 generic.go:334] "Generic (PLEG): container finished" podID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerID="9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435" exitCode=0 Jan 27 07:07:18 crc kubenswrapper[4872]: I0127 07:07:18.524095 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46gcn" event={"ID":"300a7d1b-3bea-4880-b35d-8a882069cf48","Type":"ContainerDied","Data":"9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435"} Jan 27 07:07:18 crc kubenswrapper[4872]: I0127 07:07:18.524122 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46gcn" event={"ID":"300a7d1b-3bea-4880-b35d-8a882069cf48","Type":"ContainerStarted","Data":"452491fb9a73c037d53c9b857c5b1d0945de4625d3339ddba3f582ce18e0afe3"} Jan 27 07:07:19 crc kubenswrapper[4872]: I0127 07:07:19.531349 4872 generic.go:334] "Generic (PLEG): container finished" podID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerID="790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab" exitCode=0 Jan 27 07:07:19 crc kubenswrapper[4872]: I0127 07:07:19.531417 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46gcn" event={"ID":"300a7d1b-3bea-4880-b35d-8a882069cf48","Type":"ContainerDied","Data":"790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab"} Jan 27 07:07:21 crc kubenswrapper[4872]: I0127 07:07:21.544590 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46gcn" event={"ID":"300a7d1b-3bea-4880-b35d-8a882069cf48","Type":"ContainerStarted","Data":"95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f"} Jan 27 07:07:21 crc kubenswrapper[4872]: I0127 07:07:21.565818 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46gcn" podStartSLOduration=2.12847862 podStartE2EDuration="4.565797001s" podCreationTimestamp="2026-01-27 07:07:17 +0000 UTC" firstStartedPulling="2026-01-27 07:07:18.526131942 +0000 UTC m=+815.053607138" lastFinishedPulling="2026-01-27 07:07:20.963450323 +0000 UTC m=+817.490925519" observedRunningTime="2026-01-27 07:07:21.562173472 +0000 UTC 
m=+818.089649008" watchObservedRunningTime="2026-01-27 07:07:21.565797001 +0000 UTC m=+818.093272197" Jan 27 07:07:27 crc kubenswrapper[4872]: I0127 07:07:27.475023 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:27 crc kubenswrapper[4872]: I0127 07:07:27.475547 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:27 crc kubenswrapper[4872]: I0127 07:07:27.512737 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:27 crc kubenswrapper[4872]: I0127 07:07:27.625420 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:27 crc kubenswrapper[4872]: I0127 07:07:27.743070 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46gcn"] Jan 27 07:07:29 crc kubenswrapper[4872]: I0127 07:07:29.600783 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-46gcn" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="registry-server" containerID="cri-o://95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f" gracePeriod=2 Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.432006 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.607930 4872 generic.go:334] "Generic (PLEG): container finished" podID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerID="95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f" exitCode=0 Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.607975 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46gcn" event={"ID":"300a7d1b-3bea-4880-b35d-8a882069cf48","Type":"ContainerDied","Data":"95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f"} Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.608002 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46gcn" event={"ID":"300a7d1b-3bea-4880-b35d-8a882069cf48","Type":"ContainerDied","Data":"452491fb9a73c037d53c9b857c5b1d0945de4625d3339ddba3f582ce18e0afe3"} Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.608017 4872 scope.go:117] "RemoveContainer" containerID="95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.608018 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46gcn" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.624541 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-utilities\") pod \"300a7d1b-3bea-4880-b35d-8a882069cf48\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.624634 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kf8q\" (UniqueName: \"kubernetes.io/projected/300a7d1b-3bea-4880-b35d-8a882069cf48-kube-api-access-6kf8q\") pod \"300a7d1b-3bea-4880-b35d-8a882069cf48\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.624660 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-catalog-content\") pod \"300a7d1b-3bea-4880-b35d-8a882069cf48\" (UID: \"300a7d1b-3bea-4880-b35d-8a882069cf48\") " Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.625943 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-utilities" (OuterVolumeSpecName: "utilities") pod "300a7d1b-3bea-4880-b35d-8a882069cf48" (UID: "300a7d1b-3bea-4880-b35d-8a882069cf48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.626159 4872 scope.go:117] "RemoveContainer" containerID="790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.640054 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300a7d1b-3bea-4880-b35d-8a882069cf48-kube-api-access-6kf8q" (OuterVolumeSpecName: "kube-api-access-6kf8q") pod "300a7d1b-3bea-4880-b35d-8a882069cf48" (UID: "300a7d1b-3bea-4880-b35d-8a882069cf48"). InnerVolumeSpecName "kube-api-access-6kf8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.670270 4872 scope.go:117] "RemoveContainer" containerID="9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.674875 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "300a7d1b-3bea-4880-b35d-8a882069cf48" (UID: "300a7d1b-3bea-4880-b35d-8a882069cf48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.689080 4872 scope.go:117] "RemoveContainer" containerID="95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f" Jan 27 07:07:30 crc kubenswrapper[4872]: E0127 07:07:30.689589 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f\": container with ID starting with 95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f not found: ID does not exist" containerID="95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.689625 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f"} err="failed to get container status \"95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f\": rpc error: code = NotFound desc = could not find container \"95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f\": container with ID starting with 95596c213cb58179b246f446cbbf05df80fcbe0a21495bfc5f6578936084c57f not found: ID does not exist" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.689649 4872 scope.go:117] "RemoveContainer" containerID="790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab" Jan 27 07:07:30 crc kubenswrapper[4872]: E0127 07:07:30.689959 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab\": container with ID starting with 790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab not found: ID does not exist" containerID="790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.689982 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab"} err="failed to get container status \"790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab\": rpc error: code = NotFound desc = could not find container \"790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab\": container with ID starting with 790ec3939a0eaffaef7b1c20d06d0e1583483dffc2222cb6afd3ad9a6b79dbab not found: ID does not exist" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.689998 4872 scope.go:117] "RemoveContainer" containerID="9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435" Jan 27 07:07:30 crc kubenswrapper[4872]: E0127 07:07:30.690378 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435\": container with ID starting with 9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435 not found: ID does not exist" containerID="9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.690407 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435"} err="failed to get container status \"9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435\": rpc error: code = NotFound desc = could not 
find container \"9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435\": container with ID starting with 9da1218f485c73114559a2621bcc4e44cb6657e450b8d50786da3514b8521435 not found: ID does not exist" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.726010 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.726042 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kf8q\" (UniqueName: \"kubernetes.io/projected/300a7d1b-3bea-4880-b35d-8a882069cf48-kube-api-access-6kf8q\") on node \"crc\" DevicePath \"\"" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.726053 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300a7d1b-3bea-4880-b35d-8a882069cf48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.939067 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46gcn"] Jan 27 07:07:30 crc kubenswrapper[4872]: I0127 07:07:30.942774 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-46gcn"] Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.233426 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-db7b997f7-nfdqt" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.942468 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4wv5f"] Jan 27 07:07:31 crc kubenswrapper[4872]: E0127 07:07:31.942974 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="extract-utilities" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.942989 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="extract-utilities" Jan 27 07:07:31 crc kubenswrapper[4872]: E0127 07:07:31.943000 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="registry-server" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.943006 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="registry-server" Jan 27 07:07:31 crc kubenswrapper[4872]: E0127 07:07:31.943021 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="extract-content" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.943027 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="extract-content" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.943131 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" containerName="registry-server" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.944907 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.952334 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.953649 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.954348 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dbcx5" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.961580 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8"] Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.962248 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.963725 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 07:07:31 crc kubenswrapper[4872]: I0127 07:07:31.972943 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8"] Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.068197 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zkp6s"] Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.069335 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.070904 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.071751 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.071900 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pqlgj" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.072787 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.088908 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7tfw6"] Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.089767 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.091523 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.108657 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300a7d1b-3bea-4880-b35d-8a882069cf48" path="/var/lib/kubelet/pods/300a7d1b-3bea-4880-b35d-8a882069cf48/volumes" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.112186 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7tfw6"] Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.141979 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnjw\" (UniqueName: \"kubernetes.io/projected/b4f2e3f7-8beb-490a-a856-5b2be541e665-kube-api-access-nvnjw\") pod \"frr-k8s-webhook-server-7df86c4f6c-q8sn8\" (UID: \"b4f2e3f7-8beb-490a-a856-5b2be541e665\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142045 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-frr-conf\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-reloader\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142136 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/82869486-1dcb-488a-9c92-3faf8b96df23-frr-startup\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142168 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-metrics\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142267 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4f2e3f7-8beb-490a-a856-5b2be541e665-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q8sn8\" (UID: \"b4f2e3f7-8beb-490a-a856-5b2be541e665\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142355 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5qf\" (UniqueName: \"kubernetes.io/projected/82869486-1dcb-488a-9c92-3faf8b96df23-kube-api-access-5j5qf\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142412 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-frr-sockets\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.142490 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82869486-1dcb-488a-9c92-3faf8b96df23-metrics-certs\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244103 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-frr-conf\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244153 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-reloader\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244221 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-cert\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244270 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/82869486-1dcb-488a-9c92-3faf8b96df23-frr-startup\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244297 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244343 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-metrics\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244373 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2gq\" (UniqueName: \"kubernetes.io/projected/06c5e81b-5f58-4c7a-abec-38e071749a33-kube-api-access-rd2gq\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244401 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4f2e3f7-8beb-490a-a856-5b2be541e665-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q8sn8\" (UID: \"b4f2e3f7-8beb-490a-a856-5b2be541e665\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244463 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5qf\" (UniqueName: \"kubernetes.io/projected/82869486-1dcb-488a-9c92-3faf8b96df23-kube-api-access-5j5qf\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244510 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1c33aef4-e024-4cf6-9abc-795cd0b04475-metallb-excludel2\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244536 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-frr-sockets\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244558 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-metrics-certs\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244609 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82869486-1dcb-488a-9c92-3faf8b96df23-metrics-certs\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244712 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-metrics-certs\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244814 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvnjw\" (UniqueName: \"kubernetes.io/projected/b4f2e3f7-8beb-490a-a856-5b2be541e665-kube-api-access-nvnjw\") pod \"frr-k8s-webhook-server-7df86c4f6c-q8sn8\" (UID: \"b4f2e3f7-8beb-490a-a856-5b2be541e665\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.244835 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdxw\" (UniqueName: \"kubernetes.io/projected/1c33aef4-e024-4cf6-9abc-795cd0b04475-kube-api-access-8zdxw\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.245005 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-frr-conf\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 
07:07:32.245329 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-reloader\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.245359 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-frr-sockets\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.245493 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/82869486-1dcb-488a-9c92-3faf8b96df23-metrics\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.245690 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/82869486-1dcb-488a-9c92-3faf8b96df23-frr-startup\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.252414 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82869486-1dcb-488a-9c92-3faf8b96df23-metrics-certs\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.252884 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4f2e3f7-8beb-490a-a856-5b2be541e665-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-q8sn8\" (UID: \"b4f2e3f7-8beb-490a-a856-5b2be541e665\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.263425 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5qf\" (UniqueName: \"kubernetes.io/projected/82869486-1dcb-488a-9c92-3faf8b96df23-kube-api-access-5j5qf\") pod \"frr-k8s-4wv5f\" (UID: \"82869486-1dcb-488a-9c92-3faf8b96df23\") " pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.263538 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvnjw\" (UniqueName: \"kubernetes.io/projected/b4f2e3f7-8beb-490a-a856-5b2be541e665-kube-api-access-nvnjw\") pod \"frr-k8s-webhook-server-7df86c4f6c-q8sn8\" (UID: \"b4f2e3f7-8beb-490a-a856-5b2be541e665\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.278338 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345124 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-metrics-certs\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345168 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdxw\" (UniqueName: \"kubernetes.io/projected/1c33aef4-e024-4cf6-9abc-795cd0b04475-kube-api-access-8zdxw\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345203 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-cert\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345225 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345251 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2gq\" (UniqueName: \"kubernetes.io/projected/06c5e81b-5f58-4c7a-abec-38e071749a33-kube-api-access-rd2gq\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345282 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1c33aef4-e024-4cf6-9abc-795cd0b04475-metallb-excludel2\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.345309 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-metrics-certs\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: E0127 07:07:32.345412 4872 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 27 07:07:32 crc kubenswrapper[4872]: E0127 07:07:32.345461 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-metrics-certs podName:06c5e81b-5f58-4c7a-abec-38e071749a33 nodeName:}" failed. No retries permitted until 2026-01-27 07:07:32.845445223 +0000 UTC m=+829.372920419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-metrics-certs") pod "controller-6968d8fdc4-7tfw6" (UID: "06c5e81b-5f58-4c7a-abec-38e071749a33") : secret "controller-certs-secret" not found Jan 27 07:07:32 crc kubenswrapper[4872]: E0127 07:07:32.345482 4872 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:07:32 crc kubenswrapper[4872]: E0127 07:07:32.345524 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist podName:1c33aef4-e024-4cf6-9abc-795cd0b04475 nodeName:}" failed. No retries permitted until 2026-01-27 07:07:32.845510685 +0000 UTC m=+829.372985881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist") pod "speaker-zkp6s" (UID: "1c33aef4-e024-4cf6-9abc-795cd0b04475") : secret "metallb-memberlist" not found Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.346203 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1c33aef4-e024-4cf6-9abc-795cd0b04475-metallb-excludel2\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.350997 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-metrics-certs\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.360004 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-cert\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.383986 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2gq\" (UniqueName: \"kubernetes.io/projected/06c5e81b-5f58-4c7a-abec-38e071749a33-kube-api-access-rd2gq\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.384881 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdxw\" (UniqueName: \"kubernetes.io/projected/1c33aef4-e024-4cf6-9abc-795cd0b04475-kube-api-access-8zdxw\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.562718 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.720411 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8"] Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.855293 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.855355 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-metrics-certs\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:32 crc kubenswrapper[4872]: E0127 07:07:32.855532 4872 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:07:32 crc kubenswrapper[4872]: E0127 07:07:32.855618 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist podName:1c33aef4-e024-4cf6-9abc-795cd0b04475 nodeName:}" failed. No retries permitted until 2026-01-27 07:07:33.85559754 +0000 UTC m=+830.383072926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist") pod "speaker-zkp6s" (UID: "1c33aef4-e024-4cf6-9abc-795cd0b04475") : secret "metallb-memberlist" not found Jan 27 07:07:32 crc kubenswrapper[4872]: I0127 07:07:32.859826 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c5e81b-5f58-4c7a-abec-38e071749a33-metrics-certs\") pod \"controller-6968d8fdc4-7tfw6\" (UID: \"06c5e81b-5f58-4c7a-abec-38e071749a33\") " pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.005217 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.193298 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7tfw6"] Jan 27 07:07:33 crc kubenswrapper[4872]: W0127 07:07:33.198209 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06c5e81b_5f58_4c7a_abec_38e071749a33.slice/crio-e0f326c425af981ac18d96a4b48afc4b605c2b81cf2415c0182fcaee120c9cf5 WatchSource:0}: Error finding container e0f326c425af981ac18d96a4b48afc4b605c2b81cf2415c0182fcaee120c9cf5: Status 404 returned error can't find the container with id e0f326c425af981ac18d96a4b48afc4b605c2b81cf2415c0182fcaee120c9cf5 Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.624895 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7tfw6" event={"ID":"06c5e81b-5f58-4c7a-abec-38e071749a33","Type":"ContainerStarted","Data":"67a57dbf5034be450982583afd4b137e22cae732b1ddb8a996f2e9e6b96754e1"} Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.625296 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7tfw6" event={"ID":"06c5e81b-5f58-4c7a-abec-38e071749a33","Type":"ContainerStarted","Data":"41e4e38163ee357e382e548973254618b7952da49e763601d58859c185b7405d"} Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.625311 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7tfw6" event={"ID":"06c5e81b-5f58-4c7a-abec-38e071749a33","Type":"ContainerStarted","Data":"e0f326c425af981ac18d96a4b48afc4b605c2b81cf2415c0182fcaee120c9cf5"} Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.626209 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.627024 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"732676ce9c4808f1595e32afa9fa856554a0946758d02f4a55babb905f51c873"} Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.627951 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" event={"ID":"b4f2e3f7-8beb-490a-a856-5b2be541e665","Type":"ContainerStarted","Data":"1071600cfe4b27372a5972c8271d571dbc1db8713a50572f69b32a12ae2c70fc"} Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.645488 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7tfw6" podStartSLOduration=1.645471935 podStartE2EDuration="1.645471935s" podCreationTimestamp="2026-01-27 07:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:07:33.641325382 +0000 UTC m=+830.168800588" watchObservedRunningTime="2026-01-27 07:07:33.645471935 +0000 UTC m=+830.172947131" Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.872267 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.876675 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1c33aef4-e024-4cf6-9abc-795cd0b04475-memberlist\") pod \"speaker-zkp6s\" (UID: \"1c33aef4-e024-4cf6-9abc-795cd0b04475\") " pod="metallb-system/speaker-zkp6s" Jan 27 07:07:33 crc kubenswrapper[4872]: I0127 07:07:33.886233 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zkp6s" Jan 27 07:07:33 crc kubenswrapper[4872]: W0127 07:07:33.906478 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c33aef4_e024_4cf6_9abc_795cd0b04475.slice/crio-d67762e553e4ce0a85ecdf600c579ff1a25d6f222003049105608d3238dcf134 WatchSource:0}: Error finding container d67762e553e4ce0a85ecdf600c579ff1a25d6f222003049105608d3238dcf134: Status 404 returned error can't find the container with id d67762e553e4ce0a85ecdf600c579ff1a25d6f222003049105608d3238dcf134 Jan 27 07:07:34 crc kubenswrapper[4872]: I0127 07:07:34.636426 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zkp6s" event={"ID":"1c33aef4-e024-4cf6-9abc-795cd0b04475","Type":"ContainerStarted","Data":"42dd131ae4900ce304dee3487ac05942552f1fe064ec3fc2d9f602731ab7b81f"} Jan 27 07:07:34 crc kubenswrapper[4872]: I0127 07:07:34.636737 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zkp6s" event={"ID":"1c33aef4-e024-4cf6-9abc-795cd0b04475","Type":"ContainerStarted","Data":"5e0429f5d3dcd4995c55fdc32f6ca993cc0fd879b519d2c0896edc3e2588da57"} Jan 27 07:07:34 crc kubenswrapper[4872]: I0127 07:07:34.637524 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zkp6s" event={"ID":"1c33aef4-e024-4cf6-9abc-795cd0b04475","Type":"ContainerStarted","Data":"d67762e553e4ce0a85ecdf600c579ff1a25d6f222003049105608d3238dcf134"} Jan 27 07:07:34 crc kubenswrapper[4872]: I0127 07:07:34.666295 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zkp6s" podStartSLOduration=2.6662735140000002 podStartE2EDuration="2.666273514s" podCreationTimestamp="2026-01-27 07:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:07:34.662293447 +0000 UTC m=+831.189768643" watchObservedRunningTime="2026-01-27 07:07:34.666273514 +0000 UTC m=+831.193748710" Jan 27 07:07:35 crc kubenswrapper[4872]: I0127 07:07:35.645823 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zkp6s" Jan 27 07:07:41 crc kubenswrapper[4872]: I0127 07:07:41.691991 4872 generic.go:334] "Generic (PLEG): container finished" podID="82869486-1dcb-488a-9c92-3faf8b96df23" containerID="00bc360bc33f6deb253cf1037b2e75748212ea01569b55f84f1fbbd352edcbca" exitCode=0 Jan 27 07:07:41 crc kubenswrapper[4872]: I0127 07:07:41.692044 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerDied","Data":"00bc360bc33f6deb253cf1037b2e75748212ea01569b55f84f1fbbd352edcbca"} Jan 27 07:07:41 crc kubenswrapper[4872]: I0127 07:07:41.695290 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" event={"ID":"b4f2e3f7-8beb-490a-a856-5b2be541e665","Type":"ContainerStarted","Data":"2cf2dcc96d2de06ca7715f5cd4a5db1d4e7ca4686b1a7dced2647a5d424ebe4e"} Jan 27 07:07:41 crc 
kubenswrapper[4872]: I0127 07:07:41.695454 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:41 crc kubenswrapper[4872]: I0127 07:07:41.734540 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" podStartSLOduration=2.007847042 podStartE2EDuration="10.734523347s" podCreationTimestamp="2026-01-27 07:07:31 +0000 UTC" firstStartedPulling="2026-01-27 07:07:32.730367394 +0000 UTC m=+829.257842590" lastFinishedPulling="2026-01-27 07:07:41.457043699 +0000 UTC m=+837.984518895" observedRunningTime="2026-01-27 07:07:41.730774625 +0000 UTC m=+838.258249831" watchObservedRunningTime="2026-01-27 07:07:41.734523347 +0000 UTC m=+838.261998543" Jan 27 07:07:42 crc kubenswrapper[4872]: I0127 07:07:42.727392 4872 generic.go:334] "Generic (PLEG): container finished" podID="82869486-1dcb-488a-9c92-3faf8b96df23" containerID="99cffe48f504ee0b3e784afe39c0b6c4f81a2cf82a1b068faa8ff88a28a1c72b" exitCode=0 Jan 27 07:07:42 crc kubenswrapper[4872]: I0127 07:07:42.727528 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerDied","Data":"99cffe48f504ee0b3e784afe39c0b6c4f81a2cf82a1b068faa8ff88a28a1c72b"} Jan 27 07:07:43 crc kubenswrapper[4872]: I0127 07:07:43.009598 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7tfw6" Jan 27 07:07:43 crc kubenswrapper[4872]: I0127 07:07:43.739827 4872 generic.go:334] "Generic (PLEG): container finished" podID="82869486-1dcb-488a-9c92-3faf8b96df23" containerID="f67cca62e8ee98bfb227e572f794bea7cff97213320bb0e01e786d774726450d" exitCode=0 Jan 27 07:07:43 crc kubenswrapper[4872]: I0127 07:07:43.739908 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerDied","Data":"f67cca62e8ee98bfb227e572f794bea7cff97213320bb0e01e786d774726450d"} Jan 27 07:07:44 crc kubenswrapper[4872]: I0127 07:07:44.752176 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"9aa8fc33c316a529651b460a8840668109326b62f3af0c8f19f40377262708f1"} Jan 27 07:07:44 crc kubenswrapper[4872]: I0127 07:07:44.752962 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"60e6f492e51985142bb4bd4d41437ea298a22ecfe4871a61755434497bfcb648"} Jan 27 07:07:44 crc kubenswrapper[4872]: I0127 07:07:44.752987 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"611e0111de22fa29babc5c0831c65ac5392cf23d7daf736e97f8ed16a5064e06"} Jan 27 07:07:44 crc kubenswrapper[4872]: I0127 07:07:44.752999 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"cdf9e5039b28aa49c5af2d336d153097ca6d24d73b1a3b89a81274352942d5cb"} Jan 27 07:07:44 crc kubenswrapper[4872]: I0127 07:07:44.753009 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" 
event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"b6a0db3cff962e4b7abae6f88d92fa432b988793a3b92b169694a5e4e65bcfd0"} Jan 27 07:07:45 crc kubenswrapper[4872]: I0127 07:07:45.761915 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wv5f" event={"ID":"82869486-1dcb-488a-9c92-3faf8b96df23","Type":"ContainerStarted","Data":"35abf635db2b26962cc5dd5fb93643909b79b4eb57dfc85f8bab7958d50b2165"} Jan 27 07:07:45 crc kubenswrapper[4872]: I0127 07:07:45.763089 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:45 crc kubenswrapper[4872]: I0127 07:07:45.786626 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4wv5f" podStartSLOduration=6.03410505 podStartE2EDuration="14.786609717s" podCreationTimestamp="2026-01-27 07:07:31 +0000 UTC" firstStartedPulling="2026-01-27 07:07:32.686176095 +0000 UTC m=+829.213651291" lastFinishedPulling="2026-01-27 07:07:41.438680762 +0000 UTC m=+837.966155958" observedRunningTime="2026-01-27 07:07:45.78342807 +0000 UTC m=+842.310903276" watchObservedRunningTime="2026-01-27 07:07:45.786609717 +0000 UTC m=+842.314084923" Jan 27 07:07:47 crc kubenswrapper[4872]: I0127 07:07:47.563714 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:47 crc kubenswrapper[4872]: I0127 07:07:47.614343 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:07:52 crc kubenswrapper[4872]: I0127 07:07:52.283921 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-q8sn8" Jan 27 07:07:53 crc kubenswrapper[4872]: I0127 07:07:53.890995 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zkp6s" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.077181 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vttbt"] Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.078191 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.083939 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7hhp9" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.084326 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.085187 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.102522 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vttbt"] Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.239116 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgms\" (UniqueName: \"kubernetes.io/projected/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200-kube-api-access-twgms\") pod \"openstack-operator-index-vttbt\" (UID: \"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200\") " pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.339926 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgms\" (UniqueName: \"kubernetes.io/projected/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200-kube-api-access-twgms\") pod \"openstack-operator-index-vttbt\" (UID: \"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200\") " pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.357171 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgms\" (UniqueName: \"kubernetes.io/projected/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200-kube-api-access-twgms\") pod \"openstack-operator-index-vttbt\" (UID: \"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200\") " pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.395427 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:07:57 crc kubenswrapper[4872]: I0127 07:07:57.800558 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vttbt"] Jan 27 07:07:58 crc kubenswrapper[4872]: I0127 07:07:58.115193 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vttbt" event={"ID":"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200","Type":"ContainerStarted","Data":"a74e2207cd5322bccba9885367c4a50feb14431bc7801547ba6853bcca332bc2"} Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.133210 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vttbt" event={"ID":"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200","Type":"ContainerStarted","Data":"61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe"} Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.166717 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vttbt" podStartSLOduration=1.1632152119999999 podStartE2EDuration="3.166683311s" podCreationTimestamp="2026-01-27 07:07:57 +0000 UTC" firstStartedPulling="2026-01-27 07:07:57.81029755 +0000 UTC m=+854.337772746" lastFinishedPulling="2026-01-27 07:07:59.813765639 +0000 UTC m=+856.341240845" observedRunningTime="2026-01-27 07:08:00.154920402 +0000 UTC m=+856.682395648" watchObservedRunningTime="2026-01-27 07:08:00.166683311 +0000 UTC m=+856.694158547" Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.248256 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vttbt"] Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.854808 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cp2jp"] Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.856015 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.862498 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cp2jp"] Jan 27 07:08:00 crc kubenswrapper[4872]: I0127 07:08:00.919261 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6276\" (UniqueName: \"kubernetes.io/projected/bc2aaa8d-dea7-4dd7-9efd-d4260dcda527-kube-api-access-q6276\") pod \"openstack-operator-index-cp2jp\" (UID: \"bc2aaa8d-dea7-4dd7-9efd-d4260dcda527\") " pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:01 crc kubenswrapper[4872]: I0127 07:08:01.020880 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6276\" (UniqueName: \"kubernetes.io/projected/bc2aaa8d-dea7-4dd7-9efd-d4260dcda527-kube-api-access-q6276\") pod \"openstack-operator-index-cp2jp\" (UID: \"bc2aaa8d-dea7-4dd7-9efd-d4260dcda527\") " pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:01 crc kubenswrapper[4872]: I0127 07:08:01.042216 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6276\" (UniqueName: \"kubernetes.io/projected/bc2aaa8d-dea7-4dd7-9efd-d4260dcda527-kube-api-access-q6276\") pod \"openstack-operator-index-cp2jp\" (UID: \"bc2aaa8d-dea7-4dd7-9efd-d4260dcda527\") " pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:01 crc kubenswrapper[4872]: I0127 07:08:01.180998 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:01 crc kubenswrapper[4872]: I0127 07:08:01.644696 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cp2jp"] Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.145675 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cp2jp" event={"ID":"bc2aaa8d-dea7-4dd7-9efd-d4260dcda527","Type":"ContainerStarted","Data":"85745d7cb06d87dad9fdfff25bfdbbb632f35f5485efaa1abd468bd2fd991624"} Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.146021 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cp2jp" event={"ID":"bc2aaa8d-dea7-4dd7-9efd-d4260dcda527","Type":"ContainerStarted","Data":"c6cbff5c155ac94709934c9eb247b3cead9e8d35f53783399ec3442dca2a4bb1"} Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.145797 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vttbt" podUID="fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" containerName="registry-server" containerID="cri-o://61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe" gracePeriod=2 Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.462186 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.484311 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cp2jp" podStartSLOduration=2.439575235 podStartE2EDuration="2.484289529s" podCreationTimestamp="2026-01-27 07:08:00 +0000 UTC" firstStartedPulling="2026-01-27 07:08:01.646219864 +0000 UTC m=+858.173695060" lastFinishedPulling="2026-01-27 07:08:01.690934158 +0000 UTC m=+858.218409354" observedRunningTime="2026-01-27 07:08:02.171126887 +0000 UTC m=+858.698602083" watchObservedRunningTime="2026-01-27 07:08:02.484289529 +0000 UTC m=+859.011764725" Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.564984 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4wv5f" Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.644189 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twgms\" (UniqueName: \"kubernetes.io/projected/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200-kube-api-access-twgms\") pod \"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200\" (UID: \"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200\") " Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.649061 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200-kube-api-access-twgms" (OuterVolumeSpecName: "kube-api-access-twgms") pod "fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" (UID: "fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200"). InnerVolumeSpecName "kube-api-access-twgms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:08:02 crc kubenswrapper[4872]: I0127 07:08:02.745881 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twgms\" (UniqueName: \"kubernetes.io/projected/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200-kube-api-access-twgms\") on node \"crc\" DevicePath \"\"" Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.155094 4872 generic.go:334] "Generic (PLEG): container finished" podID="fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" containerID="61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe" exitCode=0 Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.155576 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vttbt" Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.155990 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vttbt" event={"ID":"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200","Type":"ContainerDied","Data":"61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe"} Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.156123 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vttbt" event={"ID":"fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200","Type":"ContainerDied","Data":"a74e2207cd5322bccba9885367c4a50feb14431bc7801547ba6853bcca332bc2"} Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.156161 4872 scope.go:117] "RemoveContainer" containerID="61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe" Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.174584 4872 scope.go:117] "RemoveContainer" containerID="61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe" Jan 27 07:08:03 crc kubenswrapper[4872]: E0127 07:08:03.175055 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe\": container with ID starting with 61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe not found: ID does not exist" containerID="61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe" Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.175104 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe"} err="failed to get container status \"61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe\": rpc error: code = NotFound desc = could not find container \"61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe\": container with ID starting with 61deeb6d05342791fd96ae17ad3db8a62a4b9f3aecc871cc1a1c85a6a0d5a6fe not found: ID does not exist" Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.232901 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vttbt"] Jan 27 07:08:03 crc kubenswrapper[4872]: I0127 07:08:03.235189 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vttbt"] Jan 27 07:08:04 crc kubenswrapper[4872]: I0127 07:08:04.106190 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" path="/var/lib/kubelet/pods/fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200/volumes" Jan 27 07:08:11 crc kubenswrapper[4872]: I0127 07:08:11.182086 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:11 crc kubenswrapper[4872]: I0127 07:08:11.182684 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:11 crc kubenswrapper[4872]: I0127 07:08:11.214677 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:11 crc kubenswrapper[4872]: I0127 07:08:11.264683 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cp2jp" Jan 27 07:08:13 crc 
kubenswrapper[4872]: I0127 07:08:13.278887 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4"] Jan 27 07:08:13 crc kubenswrapper[4872]: E0127 07:08:13.279146 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" containerName="registry-server" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.279159 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" containerName="registry-server" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.279266 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9ff033-a7b7-4c8f-9b38-c0ecc74c7200" containerName="registry-server" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.280085 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.282031 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wqt77" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.286837 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7mh\" (UniqueName: \"kubernetes.io/projected/c81f0fb7-1778-4ff0-9a01-a10290da0be6-kube-api-access-tm7mh\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.287184 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-util\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.287417 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-bundle\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.295481 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4"] Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.388531 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-bundle\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.388587 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7mh\" (UniqueName: 
\"kubernetes.io/projected/c81f0fb7-1778-4ff0-9a01-a10290da0be6-kube-api-access-tm7mh\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.388627 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-util\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.389052 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-util\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.389170 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-bundle\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.406408 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7mh\" (UniqueName: \"kubernetes.io/projected/c81f0fb7-1778-4ff0-9a01-a10290da0be6-kube-api-access-tm7mh\") pod \"9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:13 crc kubenswrapper[4872]: I0127 07:08:13.598467 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:14 crc kubenswrapper[4872]: I0127 07:08:14.011552 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4"] Jan 27 07:08:14 crc kubenswrapper[4872]: I0127 07:08:14.215601 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" event={"ID":"c81f0fb7-1778-4ff0-9a01-a10290da0be6","Type":"ContainerStarted","Data":"b270769b3d258ad3fc670d5e4c55b9574a204388632fe8ccf302a8cd34359efb"} Jan 27 07:08:14 crc kubenswrapper[4872]: I0127 07:08:14.216100 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" event={"ID":"c81f0fb7-1778-4ff0-9a01-a10290da0be6","Type":"ContainerStarted","Data":"843fe5c8e13287cd771f152e0d0e428f52269c51946467752ff8a5667338021b"} Jan 27 07:08:15 crc kubenswrapper[4872]: I0127 07:08:15.227820 4872 generic.go:334] "Generic (PLEG): container finished" podID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerID="b270769b3d258ad3fc670d5e4c55b9574a204388632fe8ccf302a8cd34359efb" exitCode=0 Jan 27 07:08:15 crc kubenswrapper[4872]: I0127 07:08:15.227889 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" event={"ID":"c81f0fb7-1778-4ff0-9a01-a10290da0be6","Type":"ContainerDied","Data":"b270769b3d258ad3fc670d5e4c55b9574a204388632fe8ccf302a8cd34359efb"} Jan 27 07:08:16 crc kubenswrapper[4872]: I0127 07:08:16.238534 4872 generic.go:334] "Generic (PLEG): container finished" podID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerID="c7aa90d2b30620e2d085788afa9b5c6393ed2b5634c2b109abed897ef70b5114" exitCode=0 Jan 27 07:08:16 crc kubenswrapper[4872]: I0127 07:08:16.238578 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" event={"ID":"c81f0fb7-1778-4ff0-9a01-a10290da0be6","Type":"ContainerDied","Data":"c7aa90d2b30620e2d085788afa9b5c6393ed2b5634c2b109abed897ef70b5114"} Jan 27 07:08:17 crc kubenswrapper[4872]: I0127 07:08:17.255706 4872 generic.go:334] "Generic (PLEG): container finished" podID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerID="51f516b710766a83615ab97384bfb6f674848bd6d938d7ba5b1b9ceed23f31a7" exitCode=0 Jan 27 07:08:17 crc kubenswrapper[4872]: I0127 07:08:17.255759 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" event={"ID":"c81f0fb7-1778-4ff0-9a01-a10290da0be6","Type":"ContainerDied","Data":"51f516b710766a83615ab97384bfb6f674848bd6d938d7ba5b1b9ceed23f31a7"} Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.544222 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.655023 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm7mh\" (UniqueName: \"kubernetes.io/projected/c81f0fb7-1778-4ff0-9a01-a10290da0be6-kube-api-access-tm7mh\") pod \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.655109 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-bundle\") pod \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.655132 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-util\") pod \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\" (UID: \"c81f0fb7-1778-4ff0-9a01-a10290da0be6\") " Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.656676 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-bundle" (OuterVolumeSpecName: "bundle") pod "c81f0fb7-1778-4ff0-9a01-a10290da0be6" (UID: "c81f0fb7-1778-4ff0-9a01-a10290da0be6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.671182 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81f0fb7-1778-4ff0-9a01-a10290da0be6-kube-api-access-tm7mh" (OuterVolumeSpecName: "kube-api-access-tm7mh") pod "c81f0fb7-1778-4ff0-9a01-a10290da0be6" (UID: "c81f0fb7-1778-4ff0-9a01-a10290da0be6"). InnerVolumeSpecName "kube-api-access-tm7mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.676701 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-util" (OuterVolumeSpecName: "util") pod "c81f0fb7-1778-4ff0-9a01-a10290da0be6" (UID: "c81f0fb7-1778-4ff0-9a01-a10290da0be6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.756901 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm7mh\" (UniqueName: \"kubernetes.io/projected/c81f0fb7-1778-4ff0-9a01-a10290da0be6-kube-api-access-tm7mh\") on node \"crc\" DevicePath \"\"" Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.756944 4872 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:08:18 crc kubenswrapper[4872]: I0127 07:08:18.756954 4872 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c81f0fb7-1778-4ff0-9a01-a10290da0be6-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:08:19 crc kubenswrapper[4872]: I0127 07:08:19.288743 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" event={"ID":"c81f0fb7-1778-4ff0-9a01-a10290da0be6","Type":"ContainerDied","Data":"843fe5c8e13287cd771f152e0d0e428f52269c51946467752ff8a5667338021b"} Jan 27 07:08:19 crc kubenswrapper[4872]: I0127 07:08:19.288802 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="843fe5c8e13287cd771f152e0d0e428f52269c51946467752ff8a5667338021b" Jan 27 07:08:19 crc kubenswrapper[4872]: I0127 07:08:19.288873 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.000954 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.001290 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.314907 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp"] Jan 27 07:08:25 crc kubenswrapper[4872]: E0127 07:08:25.315122 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="extract" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.315134 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="extract" Jan 27 07:08:25 crc kubenswrapper[4872]: E0127 07:08:25.315148 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="util" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.315154 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="util" Jan 27 07:08:25 crc kubenswrapper[4872]: E0127 07:08:25.315169 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="pull" Jan 27 07:08:25 
crc kubenswrapper[4872]: I0127 07:08:25.315176 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="pull" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.315280 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f0fb7-1778-4ff0-9a01-a10290da0be6" containerName="extract" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.315636 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.317750 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qgjcq" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.346671 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp"] Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.446853 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7szv\" (UniqueName: \"kubernetes.io/projected/05798121-4935-462c-96be-fb6c33c72471-kube-api-access-l7szv\") pod \"openstack-operator-controller-init-5d9c9b4d7f-w29fp\" (UID: \"05798121-4935-462c-96be-fb6c33c72471\") " pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.548448 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7szv\" (UniqueName: \"kubernetes.io/projected/05798121-4935-462c-96be-fb6c33c72471-kube-api-access-l7szv\") pod \"openstack-operator-controller-init-5d9c9b4d7f-w29fp\" (UID: \"05798121-4935-462c-96be-fb6c33c72471\") " pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.604902 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7szv\" (UniqueName: \"kubernetes.io/projected/05798121-4935-462c-96be-fb6c33c72471-kube-api-access-l7szv\") pod \"openstack-operator-controller-init-5d9c9b4d7f-w29fp\" (UID: \"05798121-4935-462c-96be-fb6c33c72471\") " pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:25 crc kubenswrapper[4872]: I0127 07:08:25.632399 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:26 crc kubenswrapper[4872]: I0127 07:08:26.065201 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp"] Jan 27 07:08:26 crc kubenswrapper[4872]: I0127 07:08:26.333132 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" event={"ID":"05798121-4935-462c-96be-fb6c33c72471","Type":"ContainerStarted","Data":"9ebf7beb078df0f23939ac1f184f06b4267e693c86400b09f808af7f5605639c"} Jan 27 07:08:30 crc kubenswrapper[4872]: I0127 07:08:30.365578 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" event={"ID":"05798121-4935-462c-96be-fb6c33c72471","Type":"ContainerStarted","Data":"2127c788578cbe0513f79aa8b0a7d8103af0a0c7608882fe96cfbe6c3343b07e"} Jan 27 07:08:30 crc kubenswrapper[4872]: I0127 07:08:30.366122 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:30 crc kubenswrapper[4872]: I0127 07:08:30.395781 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" podStartSLOduration=1.396103857 podStartE2EDuration="5.395763197s" podCreationTimestamp="2026-01-27 07:08:25 +0000 UTC" firstStartedPulling="2026-01-27 07:08:26.071978786 +0000 UTC m=+882.599453982" lastFinishedPulling="2026-01-27 07:08:30.071638136 +0000 UTC m=+886.599113322" observedRunningTime="2026-01-27 07:08:30.391575973 +0000 UTC m=+886.919051189" watchObservedRunningTime="2026-01-27 07:08:30.395763197 +0000 UTC m=+886.923238393" Jan 27 07:08:35 crc kubenswrapper[4872]: I0127 07:08:35.635761 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5d9c9b4d7f-w29fp" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.010000 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.011176 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.014197 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-45ngb" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.025191 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.032673 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.033471 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.034995 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mczsn" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.053213 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.054149 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.057948 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7phwd" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.058590 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.069147 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.074933 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.077956 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.086271 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qxz8j" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.108908 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.111456 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.114976 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.119611 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-47kb4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.128464 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zsfv\" (UniqueName: \"kubernetes.io/projected/b73bd36b-309b-4692-94d3-80de8061ee1c-kube-api-access-7zsfv\") pod \"barbican-operator-controller-manager-679885cc9c-4ks2c\" (UID: \"b73bd36b-309b-4692-94d3-80de8061ee1c\") " pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.128512 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8bv\" (UniqueName: \"kubernetes.io/projected/08fccf6d-9f03-4348-968b-8a0db7e49d41-kube-api-access-cc8bv\") pod \"cinder-operator-controller-manager-655bf9cfbb-mvrf5\" (UID: \"08fccf6d-9f03-4348-968b-8a0db7e49d41\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.128554 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgnd2\" (UniqueName: \"kubernetes.io/projected/40a630f8-5dec-444b-aca1-73e4b2b76e40-kube-api-access-zgnd2\") pod \"designate-operator-controller-manager-77554cdc5c-4l5gt\" (UID: \"40a630f8-5dec-444b-aca1-73e4b2b76e40\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.159897 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.172364 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.174319 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.179727 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xvsvr" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.179875 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.217354 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.218113 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.223153 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.223364 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lsg5k" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.229172 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8bv\" (UniqueName: \"kubernetes.io/projected/08fccf6d-9f03-4348-968b-8a0db7e49d41-kube-api-access-cc8bv\") pod \"cinder-operator-controller-manager-655bf9cfbb-mvrf5\" (UID: \"08fccf6d-9f03-4348-968b-8a0db7e49d41\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.229219 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgnd2\" (UniqueName: \"kubernetes.io/projected/40a630f8-5dec-444b-aca1-73e4b2b76e40-kube-api-access-zgnd2\") pod \"designate-operator-controller-manager-77554cdc5c-4l5gt\" (UID: \"40a630f8-5dec-444b-aca1-73e4b2b76e40\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.229296 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqh2\" (UniqueName: \"kubernetes.io/projected/021fb455-7c8a-4fac-8c26-2a24c605f4e0-kube-api-access-brqh2\") pod \"heat-operator-controller-manager-74866cc64d-w4rmt\" (UID: \"021fb455-7c8a-4fac-8c26-2a24c605f4e0\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.229346 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zsfv\" (UniqueName: \"kubernetes.io/projected/b73bd36b-309b-4692-94d3-80de8061ee1c-kube-api-access-7zsfv\") pod \"barbican-operator-controller-manager-679885cc9c-4ks2c\" (UID: \"b73bd36b-309b-4692-94d3-80de8061ee1c\") " pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.229389 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4v7v\" (UniqueName: \"kubernetes.io/projected/0d14310b-b74d-4d74-9c25-32dc8ac6f8c3-kube-api-access-b4v7v\") pod \"glance-operator-controller-manager-67dd55ff59-cqrlm\" (UID: \"0d14310b-b74d-4d74-9c25-32dc8ac6f8c3\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.236101 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.236930 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.243531 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bb24v" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.243779 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.261500 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.294036 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zsfv\" (UniqueName: \"kubernetes.io/projected/b73bd36b-309b-4692-94d3-80de8061ee1c-kube-api-access-7zsfv\") pod \"barbican-operator-controller-manager-679885cc9c-4ks2c\" (UID: \"b73bd36b-309b-4692-94d3-80de8061ee1c\") " pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.294097 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.294834 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.301963 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lfbb5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.305137 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgnd2\" (UniqueName: \"kubernetes.io/projected/40a630f8-5dec-444b-aca1-73e4b2b76e40-kube-api-access-zgnd2\") pod \"designate-operator-controller-manager-77554cdc5c-4l5gt\" (UID: \"40a630f8-5dec-444b-aca1-73e4b2b76e40\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.324156 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8bv\" (UniqueName: \"kubernetes.io/projected/08fccf6d-9f03-4348-968b-8a0db7e49d41-kube-api-access-cc8bv\") pod \"cinder-operator-controller-manager-655bf9cfbb-mvrf5\" (UID: \"08fccf6d-9f03-4348-968b-8a0db7e49d41\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.330491 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhb24\" (UniqueName: \"kubernetes.io/projected/2e29beed-1f67-4af9-bd04-26453b885725-kube-api-access-xhb24\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.330704 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " 
pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.330807 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqh2\" (UniqueName: \"kubernetes.io/projected/021fb455-7c8a-4fac-8c26-2a24c605f4e0-kube-api-access-brqh2\") pod \"heat-operator-controller-manager-74866cc64d-w4rmt\" (UID: \"021fb455-7c8a-4fac-8c26-2a24c605f4e0\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.330907 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mffj7\" (UniqueName: \"kubernetes.io/projected/0e770853-8cff-46d8-88a6-4b879002bb47-kube-api-access-mffj7\") pod \"ironic-operator-controller-manager-768b776ffb-s28bs\" (UID: \"0e770853-8cff-46d8-88a6-4b879002bb47\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.330993 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r45b6\" (UniqueName: \"kubernetes.io/projected/4c623fe5-f07e-41cf-be56-97a412385609-kube-api-access-r45b6\") pod \"horizon-operator-controller-manager-77d5c5b54f-vhgwq\" (UID: \"4c623fe5-f07e-41cf-be56-97a412385609\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.331078 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4v7v\" (UniqueName: \"kubernetes.io/projected/0d14310b-b74d-4d74-9c25-32dc8ac6f8c3-kube-api-access-b4v7v\") pod \"glance-operator-controller-manager-67dd55ff59-cqrlm\" (UID: \"0d14310b-b74d-4d74-9c25-32dc8ac6f8c3\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.333194 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.350079 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.355388 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.358074 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.390166 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.395905 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.397655 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqh2\" (UniqueName: \"kubernetes.io/projected/021fb455-7c8a-4fac-8c26-2a24c605f4e0-kube-api-access-brqh2\") pod \"heat-operator-controller-manager-74866cc64d-w4rmt\" (UID: \"021fb455-7c8a-4fac-8c26-2a24c605f4e0\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.398914 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4v7v\" (UniqueName: \"kubernetes.io/projected/0d14310b-b74d-4d74-9c25-32dc8ac6f8c3-kube-api-access-b4v7v\") pod \"glance-operator-controller-manager-67dd55ff59-cqrlm\" (UID: \"0d14310b-b74d-4d74-9c25-32dc8ac6f8c3\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.400303 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.416145 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.418812 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.419142 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.419947 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.431632 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-g9txh" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.432232 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxmh\" (UniqueName: \"kubernetes.io/projected/dc917795-9a7c-4904-97f6-504995bb037f-kube-api-access-2kxmh\") pod \"manila-operator-controller-manager-849fcfbb6b-r8hvq\" (UID: \"dc917795-9a7c-4904-97f6-504995bb037f\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.432316 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mffj7\" (UniqueName: \"kubernetes.io/projected/0e770853-8cff-46d8-88a6-4b879002bb47-kube-api-access-mffj7\") pod \"ironic-operator-controller-manager-768b776ffb-s28bs\" (UID: \"0e770853-8cff-46d8-88a6-4b879002bb47\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.432391 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r45b6\" (UniqueName: \"kubernetes.io/projected/4c623fe5-f07e-41cf-be56-97a412385609-kube-api-access-r45b6\") pod \"horizon-operator-controller-manager-77d5c5b54f-vhgwq\" (UID: \"4c623fe5-f07e-41cf-be56-97a412385609\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.432432 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjt7\" (UniqueName: \"kubernetes.io/projected/37ddb430-0905-4430-990c-1a9122983760-kube-api-access-prjt7\") pod \"keystone-operator-controller-manager-55f684fd56-5qbml\" (UID: \"37ddb430-0905-4430-990c-1a9122983760\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.432479 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhb24\" (UniqueName: \"kubernetes.io/projected/2e29beed-1f67-4af9-bd04-26453b885725-kube-api-access-xhb24\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.432509 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: E0127 07:08:53.432646 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:53 crc kubenswrapper[4872]: E0127 07:08:53.432696 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert podName:2e29beed-1f67-4af9-bd04-26453b885725 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:08:53.93267648 +0000 UTC m=+910.460151676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert") pod "infra-operator-controller-manager-7d75bc88d5-w76sk" (UID: "2e29beed-1f67-4af9-bd04-26453b885725") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.441232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.443741 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cdfzj" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.443776 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-96cjx" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.474048 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mffj7\" (UniqueName: \"kubernetes.io/projected/0e770853-8cff-46d8-88a6-4b879002bb47-kube-api-access-mffj7\") pod \"ironic-operator-controller-manager-768b776ffb-s28bs\" (UID: \"0e770853-8cff-46d8-88a6-4b879002bb47\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.479720 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r45b6\" (UniqueName: \"kubernetes.io/projected/4c623fe5-f07e-41cf-be56-97a412385609-kube-api-access-r45b6\") pod \"horizon-operator-controller-manager-77d5c5b54f-vhgwq\" (UID: \"4c623fe5-f07e-41cf-be56-97a412385609\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.521695 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.525882 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhb24\" (UniqueName: \"kubernetes.io/projected/2e29beed-1f67-4af9-bd04-26453b885725-kube-api-access-xhb24\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.526766 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.544475 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjt7\" (UniqueName: \"kubernetes.io/projected/37ddb430-0905-4430-990c-1a9122983760-kube-api-access-prjt7\") pod \"keystone-operator-controller-manager-55f684fd56-5qbml\" (UID: \"37ddb430-0905-4430-990c-1a9122983760\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.544527 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjvvw\" (UniqueName: \"kubernetes.io/projected/d6e3600b-d99a-47f3-abe9-e130e507eba6-kube-api-access-xjvvw\") pod \"neutron-operator-controller-manager-7ffd8d76d4-znhq5\" (UID: \"d6e3600b-d99a-47f3-abe9-e130e507eba6\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.544596 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhlz\" (UniqueName: \"kubernetes.io/projected/7171bbfc-1ce9-4f32-9005-64f4b355756a-kube-api-access-9bhlz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4\" (UID: \"7171bbfc-1ce9-4f32-9005-64f4b355756a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.544625 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kxmh\" (UniqueName: \"kubernetes.io/projected/dc917795-9a7c-4904-97f6-504995bb037f-kube-api-access-2kxmh\") pod \"manila-operator-controller-manager-849fcfbb6b-r8hvq\" (UID: \"dc917795-9a7c-4904-97f6-504995bb037f\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.562887 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.572307 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjt7\" (UniqueName: \"kubernetes.io/projected/37ddb430-0905-4430-990c-1a9122983760-kube-api-access-prjt7\") pod \"keystone-operator-controller-manager-55f684fd56-5qbml\" (UID: \"37ddb430-0905-4430-990c-1a9122983760\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.578860 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kxmh\" (UniqueName: \"kubernetes.io/projected/dc917795-9a7c-4904-97f6-504995bb037f-kube-api-access-2kxmh\") pod 
\"manila-operator-controller-manager-849fcfbb6b-r8hvq\" (UID: \"dc917795-9a7c-4904-97f6-504995bb037f\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.578915 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.608878 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.636185 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.637445 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.657743 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjvvw\" (UniqueName: \"kubernetes.io/projected/d6e3600b-d99a-47f3-abe9-e130e507eba6-kube-api-access-xjvvw\") pod \"neutron-operator-controller-manager-7ffd8d76d4-znhq5\" (UID: \"d6e3600b-d99a-47f3-abe9-e130e507eba6\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.657830 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhlz\" (UniqueName: \"kubernetes.io/projected/7171bbfc-1ce9-4f32-9005-64f4b355756a-kube-api-access-9bhlz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4\" (UID: \"7171bbfc-1ce9-4f32-9005-64f4b355756a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.658015 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-km8zh" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.667964 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.668232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.686485 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.692650 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.694439 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ttr7q" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.706444 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjvvw\" (UniqueName: \"kubernetes.io/projected/d6e3600b-d99a-47f3-abe9-e130e507eba6-kube-api-access-xjvvw\") pod \"neutron-operator-controller-manager-7ffd8d76d4-znhq5\" (UID: \"d6e3600b-d99a-47f3-abe9-e130e507eba6\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.709098 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.715190 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhlz\" (UniqueName: \"kubernetes.io/projected/7171bbfc-1ce9-4f32-9005-64f4b355756a-kube-api-access-9bhlz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4\" (UID: \"7171bbfc-1ce9-4f32-9005-64f4b355756a\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.735780 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.736544 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.743306 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.743455 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ncjqp" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.744187 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.749393 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.754752 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zw426" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.754942 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.757253 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.759834 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2x7x\" (UniqueName: \"kubernetes.io/projected/a6541aa0-da81-49ed-be52-db0df5f2199f-kube-api-access-m2x7x\") pod \"nova-operator-controller-manager-7f54b7d6d4-ztfzb\" (UID: \"a6541aa0-da81-49ed-be52-db0df5f2199f\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.760053 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zp24\" (UniqueName: \"kubernetes.io/projected/ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9-kube-api-access-4zp24\") pod \"octavia-operator-controller-manager-7875d7675-q2bpc\" (UID: \"ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.775076 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.776043 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.779507 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-f4ctx" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.782863 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.783281 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.790853 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.791690 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.795284 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pfs45" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.813014 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.814076 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.816028 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9t2s4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.823426 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.829370 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.830781 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.833630 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jbw79" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.843168 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.850398 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862023 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jq6w\" (UniqueName: \"kubernetes.io/projected/1123a944-c0b8-4b09-a177-bdb16206d1fc-kube-api-access-9jq6w\") pod \"swift-operator-controller-manager-547cbdb99f-db4rp\" (UID: \"1123a944-c0b8-4b09-a177-bdb16206d1fc\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862073 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2x7x\" (UniqueName: \"kubernetes.io/projected/a6541aa0-da81-49ed-be52-db0df5f2199f-kube-api-access-m2x7x\") pod \"nova-operator-controller-manager-7f54b7d6d4-ztfzb\" (UID: \"a6541aa0-da81-49ed-be52-db0df5f2199f\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862097 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7l7\" (UniqueName: \"kubernetes.io/projected/91a0860c-c480-41da-8adb-b3c902e61b32-kube-api-access-gx7l7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862115 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj28z\" (UniqueName: \"kubernetes.io/projected/5bb731bb-f2e5-4022-b291-40158a3e9195-kube-api-access-dj28z\") pod \"placement-operator-controller-manager-79d5ccc684-5pnr4\" (UID: \"5bb731bb-f2e5-4022-b291-40158a3e9195\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862136 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8dx\" (UniqueName: \"kubernetes.io/projected/ab06f84a-a334-4a56-87da-f9ffe0026051-kube-api-access-7n8dx\") pod \"test-operator-controller-manager-69797bbcbd-qkvf8\" (UID: \"ab06f84a-a334-4a56-87da-f9ffe0026051\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862158 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862222 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsskk\" (UniqueName: \"kubernetes.io/projected/07268bac-40d7-4b33-be2b-51b3664ef473-kube-api-access-xsskk\") pod \"telemetry-operator-controller-manager-799bc87c89-7zrdn\" (UID: 
\"07268bac-40d7-4b33-be2b-51b3664ef473\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862243 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zp24\" (UniqueName: \"kubernetes.io/projected/ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9-kube-api-access-4zp24\") pod \"octavia-operator-controller-manager-7875d7675-q2bpc\" (UID: \"ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7579\" (UniqueName: \"kubernetes.io/projected/5d51e7b0-0af8-4529-8669-208e071a494a-kube-api-access-p7579\") pod \"ovn-operator-controller-manager-6f75f45d54-jdjg2\" (UID: \"5d51e7b0-0af8-4529-8669-208e071a494a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.862604 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.867901 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.868439 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.869522 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.871859 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mj29j" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.884449 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr"] Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.908346 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zp24\" (UniqueName: \"kubernetes.io/projected/ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9-kube-api-access-4zp24\") pod \"octavia-operator-controller-manager-7875d7675-q2bpc\" (UID: \"ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.923330 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2x7x\" (UniqueName: \"kubernetes.io/projected/a6541aa0-da81-49ed-be52-db0df5f2199f-kube-api-access-m2x7x\") pod \"nova-operator-controller-manager-7f54b7d6d4-ztfzb\" (UID: \"a6541aa0-da81-49ed-be52-db0df5f2199f\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.963664 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jq6w\" (UniqueName: \"kubernetes.io/projected/1123a944-c0b8-4b09-a177-bdb16206d1fc-kube-api-access-9jq6w\") pod \"swift-operator-controller-manager-547cbdb99f-db4rp\" (UID: 
\"1123a944-c0b8-4b09-a177-bdb16206d1fc\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.963970 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7l7\" (UniqueName: \"kubernetes.io/projected/91a0860c-c480-41da-8adb-b3c902e61b32-kube-api-access-gx7l7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.963988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj28z\" (UniqueName: \"kubernetes.io/projected/5bb731bb-f2e5-4022-b291-40158a3e9195-kube-api-access-dj28z\") pod \"placement-operator-controller-manager-79d5ccc684-5pnr4\" (UID: \"5bb731bb-f2e5-4022-b291-40158a3e9195\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.964024 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8dx\" (UniqueName: \"kubernetes.io/projected/ab06f84a-a334-4a56-87da-f9ffe0026051-kube-api-access-7n8dx\") pod \"test-operator-controller-manager-69797bbcbd-qkvf8\" (UID: \"ab06f84a-a334-4a56-87da-f9ffe0026051\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.964046 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.964090 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.964118 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjx59\" (UniqueName: \"kubernetes.io/projected/5a0744a4-d705-4fce-9d51-0646103fd458-kube-api-access-tjx59\") pod \"watcher-operator-controller-manager-75db85654f-v4fmr\" (UID: \"5a0744a4-d705-4fce-9d51-0646103fd458\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.964148 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsskk\" (UniqueName: \"kubernetes.io/projected/07268bac-40d7-4b33-be2b-51b3664ef473-kube-api-access-xsskk\") pod \"telemetry-operator-controller-manager-799bc87c89-7zrdn\" (UID: \"07268bac-40d7-4b33-be2b-51b3664ef473\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:08:53 crc kubenswrapper[4872]: I0127 07:08:53.964178 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7579\" (UniqueName: 
\"kubernetes.io/projected/5d51e7b0-0af8-4529-8669-208e071a494a-kube-api-access-p7579\") pod \"ovn-operator-controller-manager-6f75f45d54-jdjg2\" (UID: \"5d51e7b0-0af8-4529-8669-208e071a494a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:08:53 crc kubenswrapper[4872]: E0127 07:08:53.964505 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:53 crc kubenswrapper[4872]: E0127 07:08:53.964551 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert podName:91a0860c-c480-41da-8adb-b3c902e61b32 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:54.464538561 +0000 UTC m=+910.992013767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" (UID: "91a0860c-c480-41da-8adb-b3c902e61b32") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:53 crc kubenswrapper[4872]: E0127 07:08:53.964593 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:53 crc kubenswrapper[4872]: E0127 07:08:53.964612 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert podName:2e29beed-1f67-4af9-bd04-26453b885725 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:54.964605623 +0000 UTC m=+911.492080819 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert") pod "infra-operator-controller-manager-7d75bc88d5-w76sk" (UID: "2e29beed-1f67-4af9-bd04-26453b885725") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.004123 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.029482 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsskk\" (UniqueName: \"kubernetes.io/projected/07268bac-40d7-4b33-be2b-51b3664ef473-kube-api-access-xsskk\") pod \"telemetry-operator-controller-manager-799bc87c89-7zrdn\" (UID: \"07268bac-40d7-4b33-be2b-51b3664ef473\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.052160 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj28z\" (UniqueName: \"kubernetes.io/projected/5bb731bb-f2e5-4022-b291-40158a3e9195-kube-api-access-dj28z\") pod \"placement-operator-controller-manager-79d5ccc684-5pnr4\" (UID: \"5bb731bb-f2e5-4022-b291-40158a3e9195\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.052832 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7579\" (UniqueName: \"kubernetes.io/projected/5d51e7b0-0af8-4529-8669-208e071a494a-kube-api-access-p7579\") pod \"ovn-operator-controller-manager-6f75f45d54-jdjg2\" (UID: \"5d51e7b0-0af8-4529-8669-208e071a494a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.053629 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8dx\" (UniqueName: \"kubernetes.io/projected/ab06f84a-a334-4a56-87da-f9ffe0026051-kube-api-access-7n8dx\") pod \"test-operator-controller-manager-69797bbcbd-qkvf8\" (UID: \"ab06f84a-a334-4a56-87da-f9ffe0026051\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.054658 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jq6w\" (UniqueName: \"kubernetes.io/projected/1123a944-c0b8-4b09-a177-bdb16206d1fc-kube-api-access-9jq6w\") pod \"swift-operator-controller-manager-547cbdb99f-db4rp\" (UID: \"1123a944-c0b8-4b09-a177-bdb16206d1fc\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.056812 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7l7\" (UniqueName: \"kubernetes.io/projected/91a0860c-c480-41da-8adb-b3c902e61b32-kube-api-access-gx7l7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.065104 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjx59\" (UniqueName: \"kubernetes.io/projected/5a0744a4-d705-4fce-9d51-0646103fd458-kube-api-access-tjx59\") pod \"watcher-operator-controller-manager-75db85654f-v4fmr\" (UID: \"5a0744a4-d705-4fce-9d51-0646103fd458\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.092805 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.093702 
4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.102596 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.102985 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-spd9z" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.103096 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.105821 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjx59\" (UniqueName: \"kubernetes.io/projected/5a0744a4-d705-4fce-9d51-0646103fd458-kube-api-access-tjx59\") pod \"watcher-operator-controller-manager-75db85654f-v4fmr\" (UID: \"5a0744a4-d705-4fce-9d51-0646103fd458\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.156453 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.159260 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.171830 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hgb\" (UniqueName: \"kubernetes.io/projected/357cea45-02a1-4431-b6b2-098316ed0c41-kube-api-access-b8hgb\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.171914 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.172066 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.221414 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.230614 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.247439 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.274296 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8hgb\" (UniqueName: \"kubernetes.io/projected/357cea45-02a1-4431-b6b2-098316ed0c41-kube-api-access-b8hgb\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.274343 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.274445 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.274659 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.274710 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:54.774695423 +0000 UTC m=+911.302170619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.275182 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.275210 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:54.775202237 +0000 UTC m=+911.302677433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "metrics-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.341284 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.343539 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.351467 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.375206 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8hgb\" (UniqueName: \"kubernetes.io/projected/357cea45-02a1-4431-b6b2-098316ed0c41-kube-api-access-b8hgb\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.376704 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.377634 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.382100 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9mp6b" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.424206 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.476455 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.476747 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.476804 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert podName:91a0860c-c480-41da-8adb-b3c902e61b32 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:55.47677672 +0000 UTC m=+912.004251916 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" (UID: "91a0860c-c480-41da-8adb-b3c902e61b32") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: W0127 07:08:54.510033 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08fccf6d_9f03_4348_968b_8a0db7e49d41.slice/crio-54c302d85f8501b9a1a7f0133fdee7c7d20227e560ba9f2d7173f3cc548b3555 WatchSource:0}: Error finding container 54c302d85f8501b9a1a7f0133fdee7c7d20227e560ba9f2d7173f3cc548b3555: Status 404 returned error can't find the container with id 54c302d85f8501b9a1a7f0133fdee7c7d20227e560ba9f2d7173f3cc548b3555 Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.547921 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.578071 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbskh\" (UniqueName: \"kubernetes.io/projected/43a2af22-fbbc-40ad-ad0b-f787adca58a8-kube-api-access-jbskh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sw8lg\" (UID: \"43a2af22-fbbc-40ad-ad0b-f787adca58a8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.683319 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbskh\" (UniqueName: \"kubernetes.io/projected/43a2af22-fbbc-40ad-ad0b-f787adca58a8-kube-api-access-jbskh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sw8lg\" (UID: \"43a2af22-fbbc-40ad-ad0b-f787adca58a8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.698663 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.706541 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbskh\" (UniqueName: \"kubernetes.io/projected/43a2af22-fbbc-40ad-ad0b-f787adca58a8-kube-api-access-jbskh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sw8lg\" (UID: \"43a2af22-fbbc-40ad-ad0b-f787adca58a8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.766219 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.784427 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.784499 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.784604 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.784646 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:55.784633239 +0000 UTC m=+912.312108435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "webhook-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.784961 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: E0127 07:08:54.784987 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:55.784979748 +0000 UTC m=+912.312454944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "metrics-server-cert" not found Jan 27 07:08:54 crc kubenswrapper[4872]: W0127 07:08:54.843972 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb73bd36b_309b_4692_94d3_80de8061ee1c.slice/crio-6cb21d0f04a04dd9631a0cf16305a9618c7362df5ba63b192e8e40075f9e25e8 WatchSource:0}: Error finding container 6cb21d0f04a04dd9631a0cf16305a9618c7362df5ba63b192e8e40075f9e25e8: Status 404 returned error can't find the container with id 6cb21d0f04a04dd9631a0cf16305a9618c7362df5ba63b192e8e40075f9e25e8 Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.921021 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq"] Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.943968 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt"] Jan 27 07:08:54 crc kubenswrapper[4872]: W0127 07:08:54.952544 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c623fe5_f07e_41cf_be56_97a412385609.slice/crio-5a1bd8bb2206ae8e6032df261b84be915de5a9321c49e34ad7e2682fc0209e90 WatchSource:0}: Error finding container 5a1bd8bb2206ae8e6032df261b84be915de5a9321c49e34ad7e2682fc0209e90: Status 404 returned error can't find the container with id 5a1bd8bb2206ae8e6032df261b84be915de5a9321c49e34ad7e2682fc0209e90 Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.956708 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm"] Jan 27 07:08:54 crc kubenswrapper[4872]: W0127 07:08:54.965053 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d14310b_b74d_4d74_9c25_32dc8ac6f8c3.slice/crio-e9ad98b51765b044c8c9e44dbef470f230b3bd3051d15bb2134832bf1dd030be WatchSource:0}: Error finding container e9ad98b51765b044c8c9e44dbef470f230b3bd3051d15bb2134832bf1dd030be: Status 404 returned error can't find the container with id e9ad98b51765b044c8c9e44dbef470f230b3bd3051d15bb2134832bf1dd030be Jan 27 07:08:54 crc kubenswrapper[4872]: I0127 07:08:54.996341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:54.996486 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.024655 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert podName:2e29beed-1f67-4af9-bd04-26453b885725 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:57.024636335 +0000 UTC m=+913.552111521 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert") pod "infra-operator-controller-manager-7d75bc88d5-w76sk" (UID: "2e29beed-1f67-4af9-bd04-26453b885725") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.001064 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.025019 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.102832 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.121212 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5"] Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.125994 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e3600b_d99a_47f3_abe9_e130e507eba6.slice/crio-de06efaa0778ab7943607ec4b1320d48198542e18c924543e8628f61f42e3961 WatchSource:0}: Error finding container de06efaa0778ab7943607ec4b1320d48198542e18c924543e8628f61f42e3961: Status 404 returned error can't find the container with id de06efaa0778ab7943607ec4b1320d48198542e18c924543e8628f61f42e3961 Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.131869 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc917795_9a7c_4904_97f6_504995bb037f.slice/crio-192ff824842d160b49b5e51d065732bfba2004aad6d72f42d9926d49a882043a WatchSource:0}: Error finding container 192ff824842d160b49b5e51d065732bfba2004aad6d72f42d9926d49a882043a: Status 404 returned error can't find the container with id 192ff824842d160b49b5e51d065732bfba2004aad6d72f42d9926d49a882043a Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.170319 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.186317 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs"] Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.214114 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod021fb455_7c8a_4fac_8c26_2a24c605f4e0.slice/crio-f247ebf68084d08115f5479d29af51034eec154a8557b9777b32d020f07871b1 WatchSource:0}: Error finding container f247ebf68084d08115f5479d29af51034eec154a8557b9777b32d020f07871b1: Status 404 returned error can't find the container with id f247ebf68084d08115f5479d29af51034eec154a8557b9777b32d020f07871b1 Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.345782 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.473864 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.481832 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.496516 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.506180 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.514540 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2"] Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.519592 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee054c5f_dff0_4c0f_84e1_1aca8b17b9e9.slice/crio-4e4f0a9e2a08ff98e25f3430f01af0578d60437712cb232b08c78bce77675a11 WatchSource:0}: Error finding container 4e4f0a9e2a08ff98e25f3430f01af0578d60437712cb232b08c78bce77675a11: Status 404 returned error can't find the container with id 4e4f0a9e2a08ff98e25f3430f01af0578d60437712cb232b08c78bce77675a11 Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.538142 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7171bbfc_1ce9_4f32_9005_64f4b355756a.slice/crio-edea0e1d2222bdfd1e7ac8060a323f681dffc1c587e5a5e67a1e24ed5f59ddda WatchSource:0}: Error finding container edea0e1d2222bdfd1e7ac8060a323f681dffc1c587e5a5e67a1e24ed5f59ddda: Status 404 returned error can't find the container with id edea0e1d2222bdfd1e7ac8060a323f681dffc1c587e5a5e67a1e24ed5f59ddda Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.540130 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" event={"ID":"021fb455-7c8a-4fac-8c26-2a24c605f4e0","Type":"ContainerStarted","Data":"f247ebf68084d08115f5479d29af51034eec154a8557b9777b32d020f07871b1"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.541388 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.541561 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.541594 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert podName:91a0860c-c480-41da-8adb-b3c902e61b32 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:57.541582471 +0000 UTC m=+914.069057667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" (UID: "91a0860c-c480-41da-8adb-b3c902e61b32") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.545000 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" event={"ID":"ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9","Type":"ContainerStarted","Data":"4e4f0a9e2a08ff98e25f3430f01af0578d60437712cb232b08c78bce77675a11"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.548675 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" event={"ID":"40a630f8-5dec-444b-aca1-73e4b2b76e40","Type":"ContainerStarted","Data":"70f9ba8a966ac4c61c7b912307af853abce0d98e912b4d48f68a5f612e28f878"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.551203 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" event={"ID":"0d14310b-b74d-4d74-9c25-32dc8ac6f8c3","Type":"ContainerStarted","Data":"e9ad98b51765b044c8c9e44dbef470f230b3bd3051d15bb2134832bf1dd030be"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.553255 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" event={"ID":"b73bd36b-309b-4692-94d3-80de8061ee1c","Type":"ContainerStarted","Data":"6cb21d0f04a04dd9631a0cf16305a9618c7362df5ba63b192e8e40075f9e25e8"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.557082 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" event={"ID":"5bb731bb-f2e5-4022-b291-40158a3e9195","Type":"ContainerStarted","Data":"d90c5714202134123a4f5ec33a6fdfbbaaed152d626f35c0b95fd69e7076a6c4"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.562094 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" event={"ID":"d6e3600b-d99a-47f3-abe9-e130e507eba6","Type":"ContainerStarted","Data":"de06efaa0778ab7943607ec4b1320d48198542e18c924543e8628f61f42e3961"} Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.562418 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6541aa0_da81_49ed_be52_db0df5f2199f.slice/crio-794e0667ab2531a6fbde8264dbed824b85c8de692433222d4c19469a6988d04a WatchSource:0}: Error finding container 794e0667ab2531a6fbde8264dbed824b85c8de692433222d4c19469a6988d04a: Status 404 returned error can't find the container with id 794e0667ab2531a6fbde8264dbed824b85c8de692433222d4c19469a6988d04a Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.564438 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" event={"ID":"dc917795-9a7c-4904-97f6-504995bb037f","Type":"ContainerStarted","Data":"192ff824842d160b49b5e51d065732bfba2004aad6d72f42d9926d49a882043a"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.581596 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" 
event={"ID":"37ddb430-0905-4430-990c-1a9122983760","Type":"ContainerStarted","Data":"d2a576f8a8a5c64f73784207e4a1a3fd1302dd260a04ac34698c44dd585196ba"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.598232 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" event={"ID":"0e770853-8cff-46d8-88a6-4b879002bb47","Type":"ContainerStarted","Data":"4b2c9555e5b0cdfaad8e600ef8480ab456f8fcefe9da53892172e12d49a12ed0"} Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.598967 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7579,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-jdjg2_openstack-operators(5d51e7b0-0af8-4529-8669-208e071a494a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.600129 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" podUID="5d51e7b0-0af8-4529-8669-208e071a494a" Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.604290 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" event={"ID":"4c623fe5-f07e-41cf-be56-97a412385609","Type":"ContainerStarted","Data":"5a1bd8bb2206ae8e6032df261b84be915de5a9321c49e34ad7e2682fc0209e90"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.609409 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" event={"ID":"08fccf6d-9f03-4348-968b-8a0db7e49d41","Type":"ContainerStarted","Data":"54c302d85f8501b9a1a7f0133fdee7c7d20227e560ba9f2d7173f3cc548b3555"} Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.704374 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.722234 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr"] Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.726915 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jq6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-db4rp_openstack-operators(1123a944-c0b8-4b09-a177-bdb16206d1fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:08:55 crc 
kubenswrapper[4872]: E0127 07:08:55.726966 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbskh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sw8lg_openstack-operators(43a2af22-fbbc-40ad-ad0b-f787adca58a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.730584 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n8dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-qkvf8_openstack-operators(ab06f84a-a334-4a56-87da-f9ffe0026051): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.730685 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" podUID="43a2af22-fbbc-40ad-ad0b-f787adca58a8" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.730727 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" podUID="1123a944-c0b8-4b09-a177-bdb16206d1fc" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.732497 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" podUID="ab06f84a-a334-4a56-87da-f9ffe0026051" Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.736510 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.742900 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8"] Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.849126 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.849233 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " 
pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.849141 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.849301 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:57.849283406 +0000 UTC m=+914.376758602 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "metrics-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.849414 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: E0127 07:08:55.849443 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:08:57.84943478 +0000 UTC m=+914.376909976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "webhook-server-cert" not found Jan 27 07:08:55 crc kubenswrapper[4872]: I0127 07:08:55.854805 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn"] Jan 27 07:08:55 crc kubenswrapper[4872]: W0127 07:08:55.870578 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07268bac_40d7_4b33_be2b_51b3664ef473.slice/crio-cbe60c1da4481915a759a2bd48d6c43050ea0315ae0d873287b9e5c30364cc63 WatchSource:0}: Error finding container cbe60c1da4481915a759a2bd48d6c43050ea0315ae0d873287b9e5c30364cc63: Status 404 returned error can't find the container with id cbe60c1da4481915a759a2bd48d6c43050ea0315ae0d873287b9e5c30364cc63 Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.649442 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" event={"ID":"1123a944-c0b8-4b09-a177-bdb16206d1fc","Type":"ContainerStarted","Data":"baa3a1255d4477d333ba803de36ac769d31c6f1f6a3583ec83f9e0ef9d9dd280"} Jan 27 07:08:56 crc kubenswrapper[4872]: E0127 07:08:56.652067 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" podUID="1123a944-c0b8-4b09-a177-bdb16206d1fc" Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.653340 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" 
event={"ID":"5d51e7b0-0af8-4529-8669-208e071a494a","Type":"ContainerStarted","Data":"8b4485eb4163609af3105e5acbd8faf5b77da5fe5da7458a7b93a775b8acfb1c"} Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.657231 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" event={"ID":"43a2af22-fbbc-40ad-ad0b-f787adca58a8","Type":"ContainerStarted","Data":"2a01ec2fc63c0d6f2f5c98b3a83a01d2dbd0479a685e323947593776e0b9c98a"} Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.658582 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" event={"ID":"7171bbfc-1ce9-4f32-9005-64f4b355756a","Type":"ContainerStarted","Data":"edea0e1d2222bdfd1e7ac8060a323f681dffc1c587e5a5e67a1e24ed5f59ddda"} Jan 27 07:08:56 crc kubenswrapper[4872]: E0127 07:08:56.659758 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" podUID="5d51e7b0-0af8-4529-8669-208e071a494a" Jan 27 07:08:56 crc kubenswrapper[4872]: E0127 07:08:56.659892 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" podUID="43a2af22-fbbc-40ad-ad0b-f787adca58a8" Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.673426 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" event={"ID":"5a0744a4-d705-4fce-9d51-0646103fd458","Type":"ContainerStarted","Data":"4751aecdf548c8c9adc7b7a8ebf36c4e5a2d7d0e78bdfe821375aaf3af4eec6b"} Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.674469 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" event={"ID":"a6541aa0-da81-49ed-be52-db0df5f2199f","Type":"ContainerStarted","Data":"794e0667ab2531a6fbde8264dbed824b85c8de692433222d4c19469a6988d04a"} Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.676312 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" event={"ID":"ab06f84a-a334-4a56-87da-f9ffe0026051","Type":"ContainerStarted","Data":"f991b6da947c6f37af4a79c1eb96a66f89c25ca3b46480fbf33f68f94f986965"} Jan 27 07:08:56 crc kubenswrapper[4872]: E0127 07:08:56.690801 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" podUID="ab06f84a-a334-4a56-87da-f9ffe0026051" Jan 27 07:08:56 crc kubenswrapper[4872]: I0127 07:08:56.715926 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" 
event={"ID":"07268bac-40d7-4b33-be2b-51b3664ef473","Type":"ContainerStarted","Data":"cbe60c1da4481915a759a2bd48d6c43050ea0315ae0d873287b9e5c30364cc63"} Jan 27 07:08:57 crc kubenswrapper[4872]: I0127 07:08:57.079428 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.079697 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.079905 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert podName:2e29beed-1f67-4af9-bd04-26453b885725 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:01.07988456 +0000 UTC m=+917.607359756 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert") pod "infra-operator-controller-manager-7d75bc88d5-w76sk" (UID: "2e29beed-1f67-4af9-bd04-26453b885725") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: I0127 07:08:57.590777 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.591141 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.591199 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert podName:91a0860c-c480-41da-8adb-b3c902e61b32 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:01.591185243 +0000 UTC m=+918.118660429 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" (UID: "91a0860c-c480-41da-8adb-b3c902e61b32") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.731824 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" podUID="1123a944-c0b8-4b09-a177-bdb16206d1fc" Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.732976 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" podUID="ab06f84a-a334-4a56-87da-f9ffe0026051" Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.732834 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" podUID="43a2af22-fbbc-40ad-ad0b-f787adca58a8" Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.733028 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" podUID="5d51e7b0-0af8-4529-8669-208e071a494a" Jan 27 07:08:57 crc kubenswrapper[4872]: I0127 07:08:57.898731 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:57 crc kubenswrapper[4872]: I0127 07:08:57.898826 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.898957 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.899002 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:09:01.898988571 +0000 UTC m=+918.426463767 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "metrics-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.899375 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:08:57 crc kubenswrapper[4872]: E0127 07:08:57.899407 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:01.899398211 +0000 UTC m=+918.426873407 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "webhook-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: I0127 07:09:01.148145 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.148383 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.148682 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert podName:2e29beed-1f67-4af9-bd04-26453b885725 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:09.148639695 +0000 UTC m=+925.676114901 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert") pod "infra-operator-controller-manager-7d75bc88d5-w76sk" (UID: "2e29beed-1f67-4af9-bd04-26453b885725") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: I0127 07:09:01.655730 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.655909 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.656259 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert podName:91a0860c-c480-41da-8adb-b3c902e61b32 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:09:09.656239618 +0000 UTC m=+926.183714834 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" (UID: "91a0860c-c480-41da-8adb-b3c902e61b32") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: I0127 07:09:01.960264 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:01 crc kubenswrapper[4872]: I0127 07:09:01.960377 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.960468 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.960514 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.960538 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:09.960520899 +0000 UTC m=+926.487996095 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "metrics-server-cert" not found Jan 27 07:09:01 crc kubenswrapper[4872]: E0127 07:09:01.960563 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:09.96054913 +0000 UTC m=+926.488024326 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "webhook-server-cert" not found Jan 27 07:09:07 crc kubenswrapper[4872]: E0127 07:09:07.808781 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f" Jan 27 07:09:07 crc kubenswrapper[4872]: E0127 07:09:07.809385 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgnd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-77554cdc5c-4l5gt_openstack-operators(40a630f8-5dec-444b-aca1-73e4b2b76e40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:07 crc kubenswrapper[4872]: E0127 07:09:07.812537 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" 
podUID="40a630f8-5dec-444b-aca1-73e4b2b76e40" Jan 27 07:09:08 crc kubenswrapper[4872]: E0127 07:09:08.809318 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" podUID="40a630f8-5dec-444b-aca1-73e4b2b76e40" Jan 27 07:09:09 crc kubenswrapper[4872]: I0127 07:09:09.207537 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:09:09 crc kubenswrapper[4872]: I0127 07:09:09.213836 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e29beed-1f67-4af9-bd04-26453b885725-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-w76sk\" (UID: \"2e29beed-1f67-4af9-bd04-26453b885725\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:09:09 crc kubenswrapper[4872]: E0127 07:09:09.325185 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a" Jan 27 07:09:09 crc kubenswrapper[4872]: E0127 07:09:09.325371 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjx59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75db85654f-v4fmr_openstack-operators(5a0744a4-d705-4fce-9d51-0646103fd458): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:09 crc kubenswrapper[4872]: E0127 07:09:09.326527 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" podUID="5a0744a4-d705-4fce-9d51-0646103fd458" Jan 27 07:09:09 crc kubenswrapper[4872]: I0127 07:09:09.464981 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:09:09 crc kubenswrapper[4872]: I0127 07:09:09.716624 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:09:09 crc kubenswrapper[4872]: I0127 07:09:09.719890 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91a0860c-c480-41da-8adb-b3c902e61b32-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854tk59h\" (UID: \"91a0860c-c480-41da-8adb-b3c902e61b32\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:09:09 crc kubenswrapper[4872]: I0127 07:09:09.769637 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:09:09 crc kubenswrapper[4872]: E0127 07:09:09.813043 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" podUID="5a0744a4-d705-4fce-9d51-0646103fd458" Jan 27 07:09:10 crc kubenswrapper[4872]: I0127 07:09:10.022348 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:10 crc kubenswrapper[4872]: I0127 07:09:10.022457 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:10 crc kubenswrapper[4872]: E0127 07:09:10.022576 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:09:10 crc kubenswrapper[4872]: E0127 07:09:10.022647 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs podName:357cea45-02a1-4431-b6b2-098316ed0c41 nodeName:}" failed. No retries permitted until 2026-01-27 07:09:26.022633325 +0000 UTC m=+942.550108521 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs") pod "openstack-operator-controller-manager-58b6ccd9d-xft2r" (UID: "357cea45-02a1-4431-b6b2-098316ed0c41") : secret "webhook-server-cert" not found Jan 27 07:09:10 crc kubenswrapper[4872]: I0127 07:09:10.050242 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:13 crc kubenswrapper[4872]: E0127 07:09:13.369054 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 27 07:09:13 crc kubenswrapper[4872]: E0127 07:09:13.369672 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bhlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4_openstack-operators(7171bbfc-1ce9-4f32-9005-64f4b355756a): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:13 crc kubenswrapper[4872]: E0127 07:09:13.371085 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" podUID="7171bbfc-1ce9-4f32-9005-64f4b355756a" Jan 27 07:09:13 crc kubenswrapper[4872]: E0127 07:09:13.844004 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" podUID="7171bbfc-1ce9-4f32-9005-64f4b355756a" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.598616 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.598816 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kxmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-r8hvq_openstack-operators(dc917795-9a7c-4904-97f6-504995bb037f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.600045 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" podUID="dc917795-9a7c-4904-97f6-504995bb037f" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.673772 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.673949 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zp24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-q2bpc_openstack-operators(ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.675127 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" podUID="ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.850820 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" podUID="ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9" Jan 27 07:09:14 crc kubenswrapper[4872]: E0127 07:09:14.853797 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" podUID="dc917795-9a7c-4904-97f6-504995bb037f" Jan 27 07:09:16 crc kubenswrapper[4872]: E0127 07:09:16.503152 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 27 07:09:16 crc kubenswrapper[4872]: E0127 07:09:16.503611 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjvvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-znhq5_openstack-operators(d6e3600b-d99a-47f3-abe9-e130e507eba6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:16 crc kubenswrapper[4872]: E0127 07:09:16.504992 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" podUID="d6e3600b-d99a-47f3-abe9-e130e507eba6" Jan 27 07:09:16 crc kubenswrapper[4872]: E0127 07:09:16.863520 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" podUID="d6e3600b-d99a-47f3-abe9-e130e507eba6" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.263395 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.263588 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mffj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-768b776ffb-s28bs_openstack-operators(0e770853-8cff-46d8-88a6-4b879002bb47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.264931 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" podUID="0e770853-8cff-46d8-88a6-4b879002bb47" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.770240 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.770420 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dj28z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-5pnr4_openstack-operators(5bb731bb-f2e5-4022-b291-40158a3e9195): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.771569 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" podUID="5bb731bb-f2e5-4022-b291-40158a3e9195" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.835152 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.106:5001/openstack-k8s-operators/barbican-operator:3c1a406634e0a878d497246fb050dbdaba3467e1" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.835916 4872 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.106:5001/openstack-k8s-operators/barbican-operator:3c1a406634e0a878d497246fb050dbdaba3467e1" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.836076 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.106:5001/openstack-k8s-operators/barbican-operator:3c1a406634e0a878d497246fb050dbdaba3467e1,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zsfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-679885cc9c-4ks2c_openstack-operators(b73bd36b-309b-4692-94d3-80de8061ee1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.837476 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" podUID="b73bd36b-309b-4692-94d3-80de8061ee1c" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.873336 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" podUID="5bb731bb-f2e5-4022-b291-40158a3e9195" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.874221 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" 
podUID="0e770853-8cff-46d8-88a6-4b879002bb47" Jan 27 07:09:17 crc kubenswrapper[4872]: E0127 07:09:17.874702 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.106:5001/openstack-k8s-operators/barbican-operator:3c1a406634e0a878d497246fb050dbdaba3467e1\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" podUID="b73bd36b-309b-4692-94d3-80de8061ee1c" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.039043 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.040135 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m2x7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f54b7d6d4-ztfzb_openstack-operators(a6541aa0-da81-49ed-be52-db0df5f2199f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.041963 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" podUID="a6541aa0-da81-49ed-be52-db0df5f2199f" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.504876 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.505044 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbskh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sw8lg_openstack-operators(43a2af22-fbbc-40ad-ad0b-f787adca58a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.506198 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" podUID="43a2af22-fbbc-40ad-ad0b-f787adca58a8" Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.919442 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" event={"ID":"08fccf6d-9f03-4348-968b-8a0db7e49d41","Type":"ContainerStarted","Data":"d899a79557f842c0ec29942b9d43e727c8ceea153510689a987f180b8c05af5e"} Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 
07:09:22.920047 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.926609 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" event={"ID":"07268bac-40d7-4b33-be2b-51b3664ef473","Type":"ContainerStarted","Data":"b0694a60e8d858b5ab6d54d4d5a31230dd3b45b9dfca9f197a4cc92c7e6d3c09"} Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.927047 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.930743 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" event={"ID":"4c623fe5-f07e-41cf-be56-97a412385609","Type":"ContainerStarted","Data":"865e78ee4a571365089a7fdcd7e7de45e1eb17bfc793a51830acdaa7e5b8dc8e"} Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.930932 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.935640 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" event={"ID":"37ddb430-0905-4430-990c-1a9122983760","Type":"ContainerStarted","Data":"ae95e5d057fff9e068348c7eb107279b23c58ef1e6b790b0ddfbdb8afff7ba78"} Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.935769 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:09:22 crc kubenswrapper[4872]: E0127 07:09:22.937475 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" podUID="a6541aa0-da81-49ed-be52-db0df5f2199f" Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.947141 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" podStartSLOduration=5.238591491 podStartE2EDuration="29.947124974s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:54.60973946 +0000 UTC m=+911.137214656" lastFinishedPulling="2026-01-27 07:09:19.318272913 +0000 UTC m=+935.845748139" observedRunningTime="2026-01-27 07:09:22.944209345 +0000 UTC m=+939.471684561" watchObservedRunningTime="2026-01-27 07:09:22.947124974 +0000 UTC m=+939.474600170" Jan 27 07:09:22 crc kubenswrapper[4872]: I0127 07:09:22.964944 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk"] Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.009442 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" podStartSLOduration=6.563982088 podStartE2EDuration="30.009422975s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.873441552 +0000 UTC 
m=+912.400916748" lastFinishedPulling="2026-01-27 07:09:19.318882439 +0000 UTC m=+935.846357635" observedRunningTime="2026-01-27 07:09:22.978509756 +0000 UTC m=+939.505984952" watchObservedRunningTime="2026-01-27 07:09:23.009422975 +0000 UTC m=+939.536898171" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.029760 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" podStartSLOduration=5.6861026930000005 podStartE2EDuration="30.029743218s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:54.97472195 +0000 UTC m=+911.502197146" lastFinishedPulling="2026-01-27 07:09:19.318362475 +0000 UTC m=+935.845837671" observedRunningTime="2026-01-27 07:09:23.028791412 +0000 UTC m=+939.556266598" watchObservedRunningTime="2026-01-27 07:09:23.029743218 +0000 UTC m=+939.557218414" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.053571 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" podStartSLOduration=6.096390272 podStartE2EDuration="30.053551394s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.360913936 +0000 UTC m=+911.888389132" lastFinishedPulling="2026-01-27 07:09:19.318075018 +0000 UTC m=+935.845550254" observedRunningTime="2026-01-27 07:09:23.047808698 +0000 UTC m=+939.575283894" watchObservedRunningTime="2026-01-27 07:09:23.053551394 +0000 UTC m=+939.581026590" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.067715 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h"] Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.945081 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" event={"ID":"5d51e7b0-0af8-4529-8669-208e071a494a","Type":"ContainerStarted","Data":"7d17c0de717b06a34bc99d87f004aff733c0da3f2efb3c5cc1d66e9ae9be962a"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.945963 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.949087 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" event={"ID":"0d14310b-b74d-4d74-9c25-32dc8ac6f8c3","Type":"ContainerStarted","Data":"23bbf6624c207c89b3c237b6ccfa079e660413682d6c824e1d682e92f45325d4"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.949501 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.950731 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" event={"ID":"ab06f84a-a334-4a56-87da-f9ffe0026051","Type":"ContainerStarted","Data":"bfd51e514d273d1bfbb1bf70c39813dfbd9401fb7997e5208ef9c74e044c5c7c"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.951086 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.957345 4872 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" event={"ID":"1123a944-c0b8-4b09-a177-bdb16206d1fc","Type":"ContainerStarted","Data":"d9e2c12436085d10665eaf803f5056f6e434292b09f0a00f06ae4376ec0ecb5c"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.957760 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.969694 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" event={"ID":"2e29beed-1f67-4af9-bd04-26453b885725","Type":"ContainerStarted","Data":"d5779a0b4042ce720ff2cf4e3fe682505e0a4527597425c741e02c141dc0761e"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.971963 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" event={"ID":"91a0860c-c480-41da-8adb-b3c902e61b32","Type":"ContainerStarted","Data":"45fc8e606a88339a0024b05996e8f9d9e59a283b0c27f1997373e70cb12bd22c"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.992410 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" podStartSLOduration=4.006274161 podStartE2EDuration="30.992388605s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.596567395 +0000 UTC m=+912.124042591" lastFinishedPulling="2026-01-27 07:09:22.582681839 +0000 UTC m=+939.110157035" observedRunningTime="2026-01-27 07:09:23.989693842 +0000 UTC m=+940.517169038" watchObservedRunningTime="2026-01-27 07:09:23.992388605 +0000 UTC m=+940.519863801" Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.993286 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" event={"ID":"5a0744a4-d705-4fce-9d51-0646103fd458","Type":"ContainerStarted","Data":"08d751e2b1ae79e0309db651f5d541338c371e53816b2155fe6d880856455e46"} Jan 27 07:09:23 crc kubenswrapper[4872]: I0127 07:09:23.993896 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.004749 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" event={"ID":"021fb455-7c8a-4fac-8c26-2a24c605f4e0","Type":"ContainerStarted","Data":"e69b8a504090c7d8d564c0b56235e0cecca31e9e0e8504468871c9f26dedc954"} Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.015403 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" event={"ID":"40a630f8-5dec-444b-aca1-73e4b2b76e40","Type":"ContainerStarted","Data":"c3019211fdd51bc38d03e0879f96dee16aaac5d4ba3f07bbd717cd55b240648e"} Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.015805 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.028451 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" podStartSLOduration=6.684908641 
podStartE2EDuration="31.028428944s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:54.975389488 +0000 UTC m=+911.502864684" lastFinishedPulling="2026-01-27 07:09:19.318909771 +0000 UTC m=+935.846384987" observedRunningTime="2026-01-27 07:09:24.028046844 +0000 UTC m=+940.555522060" watchObservedRunningTime="2026-01-27 07:09:24.028428944 +0000 UTC m=+940.555904140" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.049536 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" podStartSLOduration=4.228768963 podStartE2EDuration="31.049516757s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.73045137 +0000 UTC m=+912.257926566" lastFinishedPulling="2026-01-27 07:09:22.551199164 +0000 UTC m=+939.078674360" observedRunningTime="2026-01-27 07:09:24.049141627 +0000 UTC m=+940.576616823" watchObservedRunningTime="2026-01-27 07:09:24.049516757 +0000 UTC m=+940.576991953" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.124122 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" podStartSLOduration=3.578260191 podStartE2EDuration="31.124106132s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.001468177 +0000 UTC m=+911.528943373" lastFinishedPulling="2026-01-27 07:09:22.547314118 +0000 UTC m=+939.074789314" observedRunningTime="2026-01-27 07:09:24.12257819 +0000 UTC m=+940.650053386" watchObservedRunningTime="2026-01-27 07:09:24.124106132 +0000 UTC m=+940.651581328" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.128855 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" podStartSLOduration=4.27140868 podStartE2EDuration="31.12882363s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.726801941 +0000 UTC m=+912.254277137" lastFinishedPulling="2026-01-27 07:09:22.584216891 +0000 UTC m=+939.111692087" observedRunningTime="2026-01-27 07:09:24.094807217 +0000 UTC m=+940.622282423" watchObservedRunningTime="2026-01-27 07:09:24.12882363 +0000 UTC m=+940.656298816" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.157227 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" podStartSLOduration=3.35727726 podStartE2EDuration="31.157214491s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.702764229 +0000 UTC m=+912.230239425" lastFinishedPulling="2026-01-27 07:09:23.50270146 +0000 UTC m=+940.030176656" observedRunningTime="2026-01-27 07:09:24.154954499 +0000 UTC m=+940.682429695" watchObservedRunningTime="2026-01-27 07:09:24.157214491 +0000 UTC m=+940.684689687" Jan 27 07:09:24 crc kubenswrapper[4872]: I0127 07:09:24.175908 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" podStartSLOduration=7.098681137 podStartE2EDuration="31.175890948s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.240724543 +0000 UTC m=+911.768199739" lastFinishedPulling="2026-01-27 07:09:19.317934334 +0000 UTC m=+935.845409550" observedRunningTime="2026-01-27 
07:09:24.175533178 +0000 UTC m=+940.703008374" watchObservedRunningTime="2026-01-27 07:09:24.175890948 +0000 UTC m=+940.703366144" Jan 27 07:09:25 crc kubenswrapper[4872]: I0127 07:09:25.001248 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:09:25 crc kubenswrapper[4872]: I0127 07:09:25.001590 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:09:25 crc kubenswrapper[4872]: I0127 07:09:25.001647 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 07:09:25 crc kubenswrapper[4872]: I0127 07:09:25.002376 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20143d97a66f83fcd247e09e3531e769d9734a19dc520edc93aa6f0111b588b3"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:09:25 crc kubenswrapper[4872]: I0127 07:09:25.002450 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://20143d97a66f83fcd247e09e3531e769d9734a19dc520edc93aa6f0111b588b3" gracePeriod=600 Jan 27 07:09:25 crc kubenswrapper[4872]: I0127 07:09:25.021164 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.028519 4872 generic.go:334] "Generic (PLEG): container finished" podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="20143d97a66f83fcd247e09e3531e769d9734a19dc520edc93aa6f0111b588b3" exitCode=0 Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.028717 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"20143d97a66f83fcd247e09e3531e769d9734a19dc520edc93aa6f0111b588b3"} Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.028769 4872 scope.go:117] "RemoveContainer" containerID="462f190318026a23f51e6663c8fa4063ee29388a381bdeb37568ca3834d2b1f2" Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.100834 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.114053 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/357cea45-02a1-4431-b6b2-098316ed0c41-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccd9d-xft2r\" (UID: \"357cea45-02a1-4431-b6b2-098316ed0c41\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.268217 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:26 crc kubenswrapper[4872]: I0127 07:09:26.832105 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r"] Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.036067 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" event={"ID":"2e29beed-1f67-4af9-bd04-26453b885725","Type":"ContainerStarted","Data":"1433fe35ec4f693a2f93a27fb72e983443154cdbdd5c47962b09ed4c1f7e6b4f"} Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.037046 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.039020 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" event={"ID":"91a0860c-c480-41da-8adb-b3c902e61b32","Type":"ContainerStarted","Data":"60af79dc536e58468dc7ad44044adf843504c520f8b7c6815e0757a5678469d3"} Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.039169 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.040424 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" event={"ID":"7171bbfc-1ce9-4f32-9005-64f4b355756a","Type":"ContainerStarted","Data":"9672fad19c7b6e2d54a18d5acecb0f1292346b88df65ea90072d2394b24bbabe"} Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.040599 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.041400 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" event={"ID":"357cea45-02a1-4431-b6b2-098316ed0c41","Type":"ContainerStarted","Data":"e2f36789cffbf288ad2cb435af72ffa74f867f63e27d4acdd63776cadfbef967"} Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.043081 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"4a0bbd673e22866101d108b8afff2324f2f3a4a3901f3a1131db6e0b5e06bc9b"} Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.060696 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" podStartSLOduration=30.705560617 podStartE2EDuration="34.060676317s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:09:23.019288163 +0000 UTC m=+939.546763359" lastFinishedPulling="2026-01-27 07:09:26.374403863 +0000 UTC 
m=+942.901879059" observedRunningTime="2026-01-27 07:09:27.056997647 +0000 UTC m=+943.584472843" watchObservedRunningTime="2026-01-27 07:09:27.060676317 +0000 UTC m=+943.588151513" Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.078620 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" podStartSLOduration=30.790753752 podStartE2EDuration="34.078601224s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:09:23.082094269 +0000 UTC m=+939.609569465" lastFinishedPulling="2026-01-27 07:09:26.369941741 +0000 UTC m=+942.897416937" observedRunningTime="2026-01-27 07:09:27.077333889 +0000 UTC m=+943.604809085" watchObservedRunningTime="2026-01-27 07:09:27.078601224 +0000 UTC m=+943.606076410" Jan 27 07:09:27 crc kubenswrapper[4872]: I0127 07:09:27.138487 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" podStartSLOduration=3.145214563 podStartE2EDuration="34.138468399s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.540509553 +0000 UTC m=+912.067984749" lastFinishedPulling="2026-01-27 07:09:26.533763389 +0000 UTC m=+943.061238585" observedRunningTime="2026-01-27 07:09:27.10683016 +0000 UTC m=+943.634305356" watchObservedRunningTime="2026-01-27 07:09:27.138468399 +0000 UTC m=+943.665943595" Jan 27 07:09:28 crc kubenswrapper[4872]: I0127 07:09:28.052621 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" event={"ID":"357cea45-02a1-4431-b6b2-098316ed0c41","Type":"ContainerStarted","Data":"03817fa76ea59ef8950ab135dc4e351979812b77728f6f51e999f193a34cdf1d"} Jan 27 07:09:28 crc kubenswrapper[4872]: I0127 07:09:28.053353 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:28 crc kubenswrapper[4872]: I0127 07:09:28.087860 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" podStartSLOduration=35.087827505999996 podStartE2EDuration="35.087827506s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:09:28.083875819 +0000 UTC m=+944.611351025" watchObservedRunningTime="2026-01-27 07:09:28.087827506 +0000 UTC m=+944.615302702" Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.085105 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" event={"ID":"ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9","Type":"ContainerStarted","Data":"2ddfc6bc8f2a0661b9370a6f525f6cd81f4316ee0a0f72070d572c48e3d406a5"} Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.087755 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" event={"ID":"5bb731bb-f2e5-4022-b291-40158a3e9195","Type":"ContainerStarted","Data":"b989aedb7d051933873987a2374227c730009858cffe4a443c6c3c4027edc4fc"} Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.088627 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.090196 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" event={"ID":"b73bd36b-309b-4692-94d3-80de8061ee1c","Type":"ContainerStarted","Data":"f987c3d1fb73dd9b30e8787fc6092492106f24dc2c667c3c5aa6fbe40a727b8b"} Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.090572 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.092638 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" event={"ID":"dc917795-9a7c-4904-97f6-504995bb037f","Type":"ContainerStarted","Data":"bfce688a54602106c2eaf7f13f9fca555495957ade00d1acde300a4280dd1bda"} Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.093188 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.111209 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" podStartSLOduration=2.990396709 podStartE2EDuration="38.111190728s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.528739863 +0000 UTC m=+912.056215059" lastFinishedPulling="2026-01-27 07:09:30.649533882 +0000 UTC m=+947.177009078" observedRunningTime="2026-01-27 07:09:31.108524275 +0000 UTC m=+947.635999471" watchObservedRunningTime="2026-01-27 07:09:31.111190728 +0000 UTC m=+947.638665924" Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.141763 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" podStartSLOduration=3.355691485 podStartE2EDuration="39.141744637s" podCreationTimestamp="2026-01-27 07:08:52 +0000 UTC" firstStartedPulling="2026-01-27 07:08:54.861293081 +0000 UTC m=+911.388768277" lastFinishedPulling="2026-01-27 07:09:30.647346233 +0000 UTC m=+947.174821429" observedRunningTime="2026-01-27 07:09:31.135945829 +0000 UTC m=+947.663421045" watchObservedRunningTime="2026-01-27 07:09:31.141744637 +0000 UTC m=+947.669219833" Jan 27 07:09:31 crc kubenswrapper[4872]: I0127 07:09:31.153437 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" podStartSLOduration=2.798426635 podStartE2EDuration="38.153420864s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.138346062 +0000 UTC m=+911.665821258" lastFinishedPulling="2026-01-27 07:09:30.493340291 +0000 UTC m=+947.020815487" observedRunningTime="2026-01-27 07:09:31.146360982 +0000 UTC m=+947.673836178" watchObservedRunningTime="2026-01-27 07:09:31.153420864 +0000 UTC m=+947.680896060" Jan 27 07:09:32 crc kubenswrapper[4872]: I0127 07:09:32.108453 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" event={"ID":"d6e3600b-d99a-47f3-abe9-e130e507eba6","Type":"ContainerStarted","Data":"160a2f9758e11ef155a3cc2466c16ba2545b82e6c66f7f291292e9e68d0bc334"} Jan 27 07:09:32 crc 
kubenswrapper[4872]: I0127 07:09:32.108751 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:09:32 crc kubenswrapper[4872]: I0127 07:09:32.109000 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:09:32 crc kubenswrapper[4872]: I0127 07:09:32.130866 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" podStartSLOduration=4.00880736 podStartE2EDuration="39.130836463s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.531047516 +0000 UTC m=+912.058522712" lastFinishedPulling="2026-01-27 07:09:30.653076619 +0000 UTC m=+947.180551815" observedRunningTime="2026-01-27 07:09:32.126896576 +0000 UTC m=+948.654371772" watchObservedRunningTime="2026-01-27 07:09:32.130836463 +0000 UTC m=+948.658311659" Jan 27 07:09:32 crc kubenswrapper[4872]: I0127 07:09:32.146413 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" podStartSLOduration=3.495941224 podStartE2EDuration="39.146397435s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.138657822 +0000 UTC m=+911.666133018" lastFinishedPulling="2026-01-27 07:09:30.789114033 +0000 UTC m=+947.316589229" observedRunningTime="2026-01-27 07:09:32.146298273 +0000 UTC m=+948.673773469" watchObservedRunningTime="2026-01-27 07:09:32.146397435 +0000 UTC m=+948.673872631" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.113834 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" event={"ID":"0e770853-8cff-46d8-88a6-4b879002bb47","Type":"ContainerStarted","Data":"b6a7b8007ac0067c3b58b7f8291d63944bd1fdac75f3a7ba840b0bbe1b77f198"} Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.114784 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.358900 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-mvrf5" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.380185 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" podStartSLOduration=3.005539308 podStartE2EDuration="40.380165145s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.230248338 +0000 UTC m=+911.757723534" lastFinishedPulling="2026-01-27 07:09:32.604874175 +0000 UTC m=+949.132349371" observedRunningTime="2026-01-27 07:09:33.141246969 +0000 UTC m=+949.668722175" watchObservedRunningTime="2026-01-27 07:09:33.380165145 +0000 UTC m=+949.907640341" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.400761 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4l5gt" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.422810 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-cqrlm" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.444226 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-w4rmt" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.525966 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-vhgwq" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.671582 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-5qbml" Jan 27 07:09:33 crc kubenswrapper[4872]: I0127 07:09:33.870443 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4" Jan 27 07:09:34 crc kubenswrapper[4872]: E0127 07:09:34.106880 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" podUID="43a2af22-fbbc-40ad-ad0b-f787adca58a8" Jan 27 07:09:34 crc kubenswrapper[4872]: I0127 07:09:34.164111 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jdjg2" Jan 27 07:09:34 crc kubenswrapper[4872]: I0127 07:09:34.242239 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-db4rp" Jan 27 07:09:34 crc kubenswrapper[4872]: I0127 07:09:34.257719 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-7zrdn" Jan 27 07:09:34 crc kubenswrapper[4872]: I0127 07:09:34.344548 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qkvf8" Jan 27 07:09:34 crc kubenswrapper[4872]: I0127 07:09:34.354261 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-v4fmr" Jan 27 07:09:35 crc kubenswrapper[4872]: I0127 07:09:35.127940 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" event={"ID":"a6541aa0-da81-49ed-be52-db0df5f2199f","Type":"ContainerStarted","Data":"f11ae5094e87343556efbdcf7095ca890e63cb5f7a8266141a5cba9578fc6e29"} Jan 27 07:09:35 crc kubenswrapper[4872]: I0127 07:09:35.128700 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:09:35 crc kubenswrapper[4872]: I0127 07:09:35.144364 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" podStartSLOduration=3.208573151 podStartE2EDuration="42.144344687s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.566697133 +0000 UTC m=+912.094172329" lastFinishedPulling="2026-01-27 07:09:34.502468669 +0000 UTC m=+951.029943865" 
observedRunningTime="2026-01-27 07:09:35.140227215 +0000 UTC m=+951.667702431" watchObservedRunningTime="2026-01-27 07:09:35.144344687 +0000 UTC m=+951.671819873" Jan 27 07:09:36 crc kubenswrapper[4872]: I0127 07:09:36.274580 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58b6ccd9d-xft2r" Jan 27 07:09:39 crc kubenswrapper[4872]: I0127 07:09:39.472495 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-w76sk" Jan 27 07:09:39 crc kubenswrapper[4872]: I0127 07:09:39.775078 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854tk59h" Jan 27 07:09:43 crc kubenswrapper[4872]: I0127 07:09:43.335487 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-679885cc9c-4ks2c" Jan 27 07:09:43 crc kubenswrapper[4872]: I0127 07:09:43.612331 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-s28bs" Jan 27 07:09:43 crc kubenswrapper[4872]: I0127 07:09:43.785138 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-r8hvq" Jan 27 07:09:43 crc kubenswrapper[4872]: I0127 07:09:43.846110 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-znhq5" Jan 27 07:09:44 crc kubenswrapper[4872]: I0127 07:09:44.006981 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-ztfzb" Jan 27 07:09:44 crc kubenswrapper[4872]: I0127 07:09:44.160659 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-q2bpc" Jan 27 07:09:44 crc kubenswrapper[4872]: I0127 07:09:44.228214 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5pnr4" Jan 27 07:09:46 crc kubenswrapper[4872]: I0127 07:09:46.102239 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:09:47 crc kubenswrapper[4872]: I0127 07:09:47.203287 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" event={"ID":"43a2af22-fbbc-40ad-ad0b-f787adca58a8","Type":"ContainerStarted","Data":"0531e3e0970777307614f36337cf65c465b787ed452bd2626fb92e4758ac09cd"} Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.090711 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sw8lg" podStartSLOduration=56.015917677 podStartE2EDuration="1m47.090696353s" podCreationTimestamp="2026-01-27 07:08:53 +0000 UTC" firstStartedPulling="2026-01-27 07:08:55.726744669 +0000 UTC m=+912.254219875" lastFinishedPulling="2026-01-27 07:09:46.801523355 +0000 UTC m=+963.328998551" observedRunningTime="2026-01-27 07:09:47.219236946 +0000 UTC m=+963.746712152" watchObservedRunningTime="2026-01-27 07:10:40.090696353 +0000 UTC m=+1016.618171549" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 
07:10:40.092984 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cdzpw/must-gather-mlsv6"] Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.094086 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.103536 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cdzpw"/"default-dockercfg-5fckv" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.103784 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cdzpw"/"kube-root-ca.crt" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.108019 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cdzpw"/"openshift-service-ca.crt" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.134105 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cdzpw/must-gather-mlsv6"] Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.236617 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45b8df7a-1a63-4734-a838-b8f07b10f21c-must-gather-output\") pod \"must-gather-mlsv6\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.236728 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpkg\" (UniqueName: \"kubernetes.io/projected/45b8df7a-1a63-4734-a838-b8f07b10f21c-kube-api-access-6mpkg\") pod \"must-gather-mlsv6\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.337793 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45b8df7a-1a63-4734-a838-b8f07b10f21c-must-gather-output\") pod \"must-gather-mlsv6\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.337894 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mpkg\" (UniqueName: \"kubernetes.io/projected/45b8df7a-1a63-4734-a838-b8f07b10f21c-kube-api-access-6mpkg\") pod \"must-gather-mlsv6\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.338291 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45b8df7a-1a63-4734-a838-b8f07b10f21c-must-gather-output\") pod \"must-gather-mlsv6\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.361145 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mpkg\" (UniqueName: \"kubernetes.io/projected/45b8df7a-1a63-4734-a838-b8f07b10f21c-kube-api-access-6mpkg\") pod \"must-gather-mlsv6\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.413411 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:10:40 crc kubenswrapper[4872]: I0127 07:10:40.678713 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cdzpw/must-gather-mlsv6"] Jan 27 07:10:41 crc kubenswrapper[4872]: I0127 07:10:41.594323 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" event={"ID":"45b8df7a-1a63-4734-a838-b8f07b10f21c","Type":"ContainerStarted","Data":"598af290159916ca60d986a14e848a8d8fc3ec45fa24e0d9b069a6dfac59a4d0"} Jan 27 07:10:48 crc kubenswrapper[4872]: I0127 07:10:48.661670 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" event={"ID":"45b8df7a-1a63-4734-a838-b8f07b10f21c","Type":"ContainerStarted","Data":"fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc"} Jan 27 07:10:48 crc kubenswrapper[4872]: I0127 07:10:48.662274 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" event={"ID":"45b8df7a-1a63-4734-a838-b8f07b10f21c","Type":"ContainerStarted","Data":"8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a"} Jan 27 07:10:48 crc kubenswrapper[4872]: I0127 07:10:48.684509 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" podStartSLOduration=1.111581026 podStartE2EDuration="8.68449112s" podCreationTimestamp="2026-01-27 07:10:40 +0000 UTC" firstStartedPulling="2026-01-27 07:10:40.688467082 +0000 UTC m=+1017.215942278" lastFinishedPulling="2026-01-27 07:10:48.261377176 +0000 UTC m=+1024.788852372" observedRunningTime="2026-01-27 07:10:48.681690163 +0000 UTC m=+1025.209165379" watchObservedRunningTime="2026-01-27 07:10:48.68449112 +0000 UTC m=+1025.211966316" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.169934 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/util/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.311385 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/util/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.421283 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/pull/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.484137 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/pull/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.612480 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/util/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.636992 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/pull/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.892713 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9ec6e541feeaeee71c1c7b94483acf217beec16be02a71d35e4d0f1bdcdfqk4_c81f0fb7-1778-4ff0-9a01-a10290da0be6/extract/0.log" Jan 27 07:11:48 crc kubenswrapper[4872]: I0127 07:11:48.991526 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-679885cc9c-4ks2c_b73bd36b-309b-4692-94d3-80de8061ee1c/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.094251 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-mvrf5_08fccf6d-9f03-4348-968b-8a0db7e49d41/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.177055 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-4l5gt_40a630f8-5dec-444b-aca1-73e4b2b76e40/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.313452 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-cqrlm_0d14310b-b74d-4d74-9c25-32dc8ac6f8c3/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.446692 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-74866cc64d-w4rmt_021fb455-7c8a-4fac-8c26-2a24c605f4e0/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.503212 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-vhgwq_4c623fe5-f07e-41cf-be56-97a412385609/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.672366 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-w76sk_2e29beed-1f67-4af9-bd04-26453b885725/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.815897 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-s28bs_0e770853-8cff-46d8-88a6-4b879002bb47/manager/0.log" Jan 27 07:11:49 crc kubenswrapper[4872]: I0127 07:11:49.922913 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-5qbml_37ddb430-0905-4430-990c-1a9122983760/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.113012 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-r8hvq_dc917795-9a7c-4904-97f6-504995bb037f/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.133902 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-m4kh4_7171bbfc-1ce9-4f32-9005-64f4b355756a/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.294891 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-znhq5_d6e3600b-d99a-47f3-abe9-e130e507eba6/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.369793 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f54b7d6d4-ztfzb_a6541aa0-da81-49ed-be52-db0df5f2199f/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.540487 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-q2bpc_ee054c5f-dff0-4c0f-84e1-1aca8b17b9e9/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.582640 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854tk59h_91a0860c-c480-41da-8adb-b3c902e61b32/manager/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.812587 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5d9c9b4d7f-w29fp_05798121-4935-462c-96be-fb6c33c72471/operator/0.log" Jan 27 07:11:50 crc kubenswrapper[4872]: I0127 07:11:50.861885 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b6ccd9d-xft2r_357cea45-02a1-4431-b6b2-098316ed0c41/manager/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.093068 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-jdjg2_5d51e7b0-0af8-4529-8669-208e071a494a/manager/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.102354 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cp2jp_bc2aaa8d-dea7-4dd7-9efd-d4260dcda527/registry-server/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.301241 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-5pnr4_5bb731bb-f2e5-4022-b291-40158a3e9195/manager/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.327970 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sw8lg_43a2af22-fbbc-40ad-ad0b-f787adca58a8/operator/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.535827 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-db4rp_1123a944-c0b8-4b09-a177-bdb16206d1fc/manager/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.607859 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-7zrdn_07268bac-40d7-4b33-be2b-51b3664ef473/manager/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.795823 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-qkvf8_ab06f84a-a334-4a56-87da-f9ffe0026051/manager/0.log" Jan 27 07:11:51 crc kubenswrapper[4872]: I0127 07:11:51.843502 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-v4fmr_5a0744a4-d705-4fce-9d51-0646103fd458/manager/0.log" Jan 27 07:11:55 crc kubenswrapper[4872]: I0127 07:11:55.001697 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:11:55 crc kubenswrapper[4872]: I0127 07:11:55.002032 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:12:11 crc kubenswrapper[4872]: I0127 07:12:11.448060 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dx5vk_aa0d3672-506e-417a-8528-ba9e7ea8a2ba/control-plane-machine-set-operator/0.log" Jan 27 07:12:11 crc kubenswrapper[4872]: I0127 07:12:11.635921 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tktwd_d579a88b-344f-4198-8070-bc7a7b73bbaf/kube-rbac-proxy/0.log" Jan 27 07:12:11 crc kubenswrapper[4872]: I0127 07:12:11.724175 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tktwd_d579a88b-344f-4198-8070-bc7a7b73bbaf/machine-api-operator/0.log" Jan 27 07:12:24 crc kubenswrapper[4872]: I0127 07:12:24.053076 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2bbbk_8b332b36-0066-48c8-a411-12cfe9eefdae/cert-manager-controller/0.log" Jan 27 07:12:24 crc kubenswrapper[4872]: I0127 07:12:24.240233 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-pvs6b_0f5348ad-9399-4799-96c9-972a71d03900/cert-manager-cainjector/0.log" Jan 27 07:12:24 crc kubenswrapper[4872]: I0127 07:12:24.351901 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l6kbm_9f718d42-d842-42f7-bafc-a10d960c3555/cert-manager-webhook/0.log" Jan 27 07:12:25 crc kubenswrapper[4872]: I0127 07:12:25.001916 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:12:25 crc kubenswrapper[4872]: I0127 07:12:25.001982 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:12:37 crc kubenswrapper[4872]: I0127 07:12:37.654344 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-q5l6z_1258b368-7cb6-4e54-9def-f4e379f44f4d/nmstate-console-plugin/0.log" Jan 27 07:12:37 crc kubenswrapper[4872]: I0127 07:12:37.904366 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wcbvw_a9bd09ec-0fa1-4d66-9df7-86950125ea55/nmstate-handler/0.log" Jan 27 07:12:37 crc kubenswrapper[4872]: I0127 07:12:37.940159 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lgvbb_630c7ad6-ffad-412c-8e81-c674d0a64558/kube-rbac-proxy/0.log" Jan 27 07:12:38 crc kubenswrapper[4872]: I0127 07:12:38.042315 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lgvbb_630c7ad6-ffad-412c-8e81-c674d0a64558/nmstate-metrics/0.log" Jan 27 07:12:38 crc kubenswrapper[4872]: I0127 07:12:38.077837 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-j8rtk_e1b87711-10f9-4c88-966a-6229cf79f03a/nmstate-operator/0.log" Jan 
27 07:12:38 crc kubenswrapper[4872]: I0127 07:12:38.243272 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-f4ltq_dc5cf68b-3227-4b9d-aaf9-4562e622e0a0/nmstate-webhook/0.log" Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.001735 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.002249 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.002292 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.002813 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a0bbd673e22866101d108b8afff2324f2f3a4a3901f3a1131db6e0b5e06bc9b"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.002955 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://4a0bbd673e22866101d108b8afff2324f2f3a4a3901f3a1131db6e0b5e06bc9b" gracePeriod=600 Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.411050 4872 generic.go:334] "Generic (PLEG): container finished" podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="4a0bbd673e22866101d108b8afff2324f2f3a4a3901f3a1131db6e0b5e06bc9b" exitCode=0 Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.411090 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"4a0bbd673e22866101d108b8afff2324f2f3a4a3901f3a1131db6e0b5e06bc9b"} Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.411115 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"aa5d70964ed817705de4b0d0be7485d7dbadb35ad0952fe0f6c6da8a41ab3353"} Jan 27 07:12:55 crc kubenswrapper[4872]: I0127 07:12:55.411132 4872 scope.go:117] "RemoveContainer" containerID="20143d97a66f83fcd247e09e3531e769d9734a19dc520edc93aa6f0111b588b3" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.443066 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7tfw6_06c5e81b-5f58-4c7a-abec-38e071749a33/kube-rbac-proxy/0.log" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.491565 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-7tfw6_06c5e81b-5f58-4c7a-abec-38e071749a33/controller/0.log" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.648571 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-frr-files/0.log" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.795544 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-metrics/0.log" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.800043 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-reloader/0.log" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.827711 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-frr-files/0.log" Jan 27 07:13:02 crc kubenswrapper[4872]: I0127 07:13:02.869590 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-reloader/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.031320 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-metrics/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.042588 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-frr-files/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.060566 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-reloader/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.087244 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-metrics/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.198511 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-frr-files/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.224960 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-reloader/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.253584 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/cp-metrics/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.343725 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/controller/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.404960 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/frr-metrics/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.474810 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/kube-rbac-proxy/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.496509 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/frr/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 
07:13:03.561444 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/kube-rbac-proxy-frr/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.622050 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4wv5f_82869486-1dcb-488a-9c92-3faf8b96df23/reloader/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.700749 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-q8sn8_b4f2e3f7-8beb-490a-a856-5b2be541e665/frr-k8s-webhook-server/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.854589 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-db7b997f7-nfdqt_92a2f2b1-53a8-414f-9530-b7d01f84a6a9/manager/0.log" Jan 27 07:13:03 crc kubenswrapper[4872]: I0127 07:13:03.930403 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-78cd4d8b5b-t5s7b_4c59ffc3-ca7a-41d4-885e-dfb0af134941/webhook-server/0.log" Jan 27 07:13:04 crc kubenswrapper[4872]: I0127 07:13:04.050755 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zkp6s_1c33aef4-e024-4cf6-9abc-795cd0b04475/kube-rbac-proxy/0.log" Jan 27 07:13:04 crc kubenswrapper[4872]: I0127 07:13:04.175075 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zkp6s_1c33aef4-e024-4cf6-9abc-795cd0b04475/speaker/0.log" Jan 27 07:13:15 crc kubenswrapper[4872]: I0127 07:13:15.960175 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/util/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.160700 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/pull/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.170757 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/pull/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.195268 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/util/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.343070 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/util/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.370939 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/extract/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.371002 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcthnl9_d7c87876-5377-48d5-9648-b761334f75c7/pull/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.555909 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/util/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.734080 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/pull/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.748917 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/util/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.762907 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/pull/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.883894 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/util/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.942194 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/pull/0.log" Jan 27 07:13:16 crc kubenswrapper[4872]: I0127 07:13:16.943541 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132bkkg_59de723c-44ad-4f00-b0d8-e04fee093418/extract/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.074019 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/extract-utilities/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.260147 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/extract-content/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.260927 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/extract-utilities/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.274654 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/extract-content/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.399903 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/extract-utilities/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.495187 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/extract-content/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.630338 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zzwgm_788f4442-73e7-486a-b79c-83560f1c7cc3/registry-server/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.647954 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/extract-utilities/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.790084 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/extract-utilities/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.803440 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/extract-content/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.839632 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/extract-content/0.log" Jan 27 07:13:17 crc kubenswrapper[4872]: I0127 07:13:17.972744 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/extract-utilities/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.018713 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/extract-content/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.269213 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xc2lz_48d4741c-2022-489f-b068-dfa9ab32498a/marketplace-operator/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.283180 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dx8st_81982f4d-c34f-4617-ae87-9021ffbad391/registry-server/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.366052 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/extract-utilities/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.520239 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/extract-content/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.531814 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/extract-utilities/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.570920 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/extract-content/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.750727 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/extract-content/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.754726 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/extract-utilities/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.834747 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lb59_dc57058f-3a9e-42fb-ae07-53e246ed8fc2/registry-server/0.log" Jan 27 07:13:18 crc kubenswrapper[4872]: I0127 07:13:18.950042 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/extract-utilities/0.log" Jan 27 07:13:19 crc kubenswrapper[4872]: I0127 07:13:19.158401 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/extract-content/0.log" Jan 27 07:13:19 crc kubenswrapper[4872]: I0127 07:13:19.158468 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/extract-content/0.log" Jan 27 07:13:19 crc kubenswrapper[4872]: I0127 07:13:19.178317 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/extract-utilities/0.log" Jan 27 07:13:19 crc kubenswrapper[4872]: I0127 07:13:19.393857 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/extract-utilities/0.log" Jan 27 07:13:19 crc kubenswrapper[4872]: I0127 07:13:19.399902 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/extract-content/0.log" Jan 27 07:13:19 crc kubenswrapper[4872]: I0127 07:13:19.504445 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sq6g2_059bb67f-38aa-492a-8d62-cfc3d3efc41d/registry-server/0.log" Jan 27 07:14:24 crc kubenswrapper[4872]: I0127 07:14:24.986463 4872 generic.go:334] "Generic (PLEG): container finished" podID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerID="8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a" exitCode=0 Jan 27 07:14:24 crc kubenswrapper[4872]: I0127 07:14:24.987017 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" event={"ID":"45b8df7a-1a63-4734-a838-b8f07b10f21c","Type":"ContainerDied","Data":"8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a"} Jan 27 07:14:24 crc kubenswrapper[4872]: I0127 07:14:24.987524 4872 scope.go:117] "RemoveContainer" containerID="8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a" Jan 27 07:14:25 crc kubenswrapper[4872]: I0127 07:14:25.746157 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdzpw_must-gather-mlsv6_45b8df7a-1a63-4734-a838-b8f07b10f21c/gather/0.log" Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.045418 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cdzpw/must-gather-mlsv6"] Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.046184 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="copy" containerID="cri-o://fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc" gracePeriod=2 Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.053153 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cdzpw/must-gather-mlsv6"] Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.469729 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdzpw_must-gather-mlsv6_45b8df7a-1a63-4734-a838-b8f07b10f21c/copy/0.log" Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.470138 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.648540 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45b8df7a-1a63-4734-a838-b8f07b10f21c-must-gather-output\") pod \"45b8df7a-1a63-4734-a838-b8f07b10f21c\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.648703 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mpkg\" (UniqueName: \"kubernetes.io/projected/45b8df7a-1a63-4734-a838-b8f07b10f21c-kube-api-access-6mpkg\") pod \"45b8df7a-1a63-4734-a838-b8f07b10f21c\" (UID: \"45b8df7a-1a63-4734-a838-b8f07b10f21c\") " Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.660265 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b8df7a-1a63-4734-a838-b8f07b10f21c-kube-api-access-6mpkg" (OuterVolumeSpecName: "kube-api-access-6mpkg") pod "45b8df7a-1a63-4734-a838-b8f07b10f21c" (UID: "45b8df7a-1a63-4734-a838-b8f07b10f21c"). InnerVolumeSpecName "kube-api-access-6mpkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.726575 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b8df7a-1a63-4734-a838-b8f07b10f21c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "45b8df7a-1a63-4734-a838-b8f07b10f21c" (UID: "45b8df7a-1a63-4734-a838-b8f07b10f21c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.750604 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mpkg\" (UniqueName: \"kubernetes.io/projected/45b8df7a-1a63-4734-a838-b8f07b10f21c-kube-api-access-6mpkg\") on node \"crc\" DevicePath \"\"" Jan 27 07:14:33 crc kubenswrapper[4872]: I0127 07:14:33.750642 4872 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45b8df7a-1a63-4734-a838-b8f07b10f21c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.050080 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cdzpw_must-gather-mlsv6_45b8df7a-1a63-4734-a838-b8f07b10f21c/copy/0.log" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.050698 4872 generic.go:334] "Generic (PLEG): container finished" podID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerID="fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc" exitCode=143 Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.050746 4872 scope.go:117] "RemoveContainer" containerID="fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.050894 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cdzpw/must-gather-mlsv6" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.069109 4872 scope.go:117] "RemoveContainer" containerID="8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.116436 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" path="/var/lib/kubelet/pods/45b8df7a-1a63-4734-a838-b8f07b10f21c/volumes" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.126666 4872 scope.go:117] "RemoveContainer" containerID="fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc" Jan 27 07:14:34 crc kubenswrapper[4872]: E0127 07:14:34.127280 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc\": container with ID starting with fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc not found: ID does not exist" containerID="fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.127325 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc"} err="failed to get container status \"fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc\": rpc error: code = NotFound desc = could not find container \"fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc\": container with ID starting with fdf32f8e79962119ae50accc53e8c9cdcc4d5f29e42eeb5797b3edfb93bc8bfc not found: ID does not exist" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.127354 4872 scope.go:117] "RemoveContainer" containerID="8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a" Jan 27 07:14:34 crc kubenswrapper[4872]: E0127 07:14:34.127711 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a\": container with ID starting with 8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a not found: ID does not exist" containerID="8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a" Jan 27 07:14:34 crc kubenswrapper[4872]: I0127 07:14:34.127737 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a"} err="failed to get container status \"8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a\": rpc error: code = NotFound desc = could not find container \"8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a\": container with ID starting with 8700cff0023603259051ff559b44d3d88435a66490a7ad652c53e64a17fc1c8a not found: ID does not exist" Jan 27 07:14:55 crc kubenswrapper[4872]: I0127 07:14:55.001369 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:14:55 crc kubenswrapper[4872]: I0127 07:14:55.002145 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" 
podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.155717 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5"] Jan 27 07:15:00 crc kubenswrapper[4872]: E0127 07:15:00.156486 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="gather" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.156506 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="gather" Jan 27 07:15:00 crc kubenswrapper[4872]: E0127 07:15:00.156529 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="copy" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.156544 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="copy" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.156773 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="copy" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.156798 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b8df7a-1a63-4734-a838-b8f07b10f21c" containerName="gather" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.157510 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.160137 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.161633 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.177302 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5"] Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.308152 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-secret-volume\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.308228 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-config-volume\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.308278 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2s5x\" (UniqueName: \"kubernetes.io/projected/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-kube-api-access-w2s5x\") pod \"collect-profiles-29491635-n5kd5\" 
(UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.410064 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-secret-volume\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.410125 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-config-volume\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.410173 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2s5x\" (UniqueName: \"kubernetes.io/projected/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-kube-api-access-w2s5x\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.411235 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-config-volume\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.421294 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-secret-volume\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.433873 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2s5x\" (UniqueName: \"kubernetes.io/projected/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-kube-api-access-w2s5x\") pod \"collect-profiles-29491635-n5kd5\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.488035 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:00 crc kubenswrapper[4872]: I0127 07:15:00.969126 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5"] Jan 27 07:15:01 crc kubenswrapper[4872]: I0127 07:15:01.453661 4872 generic.go:334] "Generic (PLEG): container finished" podID="2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" containerID="fd3ace1215980cb92a3a114e90583c64aa69be622e332049354a312b5b3fdb32" exitCode=0 Jan 27 07:15:01 crc kubenswrapper[4872]: I0127 07:15:01.453728 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" event={"ID":"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad","Type":"ContainerDied","Data":"fd3ace1215980cb92a3a114e90583c64aa69be622e332049354a312b5b3fdb32"} Jan 27 07:15:01 crc kubenswrapper[4872]: I0127 07:15:01.454089 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" event={"ID":"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad","Type":"ContainerStarted","Data":"876fa706ddbea4afab2256e1bbf32de91b6d86b80f1970cc2e67bb89dddc4ab9"} Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.715153 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.768402 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-secret-volume\") pod \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.774112 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" (UID: "2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.868985 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2s5x\" (UniqueName: \"kubernetes.io/projected/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-kube-api-access-w2s5x\") pod \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.869307 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-config-volume\") pod \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\" (UID: \"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad\") " Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.869726 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.869881 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" (UID: "2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.872276 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-kube-api-access-w2s5x" (OuterVolumeSpecName: "kube-api-access-w2s5x") pod "2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" (UID: "2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad"). InnerVolumeSpecName "kube-api-access-w2s5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.970911 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2s5x\" (UniqueName: \"kubernetes.io/projected/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-kube-api-access-w2s5x\") on node \"crc\" DevicePath \"\"" Jan 27 07:15:02 crc kubenswrapper[4872]: I0127 07:15:02.970938 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:15:03 crc kubenswrapper[4872]: I0127 07:15:03.467461 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" event={"ID":"2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad","Type":"ContainerDied","Data":"876fa706ddbea4afab2256e1bbf32de91b6d86b80f1970cc2e67bb89dddc4ab9"} Jan 27 07:15:03 crc kubenswrapper[4872]: I0127 07:15:03.467503 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="876fa706ddbea4afab2256e1bbf32de91b6d86b80f1970cc2e67bb89dddc4ab9" Jan 27 07:15:03 crc kubenswrapper[4872]: I0127 07:15:03.467972 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491635-n5kd5" Jan 27 07:15:25 crc kubenswrapper[4872]: I0127 07:15:25.001751 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:15:25 crc kubenswrapper[4872]: I0127 07:15:25.002283 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.001140 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.001795 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.001891 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.003026 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa5d70964ed817705de4b0d0be7485d7dbadb35ad0952fe0f6c6da8a41ab3353"} pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.003140 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" containerID="cri-o://aa5d70964ed817705de4b0d0be7485d7dbadb35ad0952fe0f6c6da8a41ab3353" gracePeriod=600 Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.803853 4872 generic.go:334] "Generic (PLEG): container finished" podID="5ea42312-a362-48cd-8387-34c060df18a1" containerID="aa5d70964ed817705de4b0d0be7485d7dbadb35ad0952fe0f6c6da8a41ab3353" exitCode=0 Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.803886 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerDied","Data":"aa5d70964ed817705de4b0d0be7485d7dbadb35ad0952fe0f6c6da8a41ab3353"} Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.804408 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" 
event={"ID":"5ea42312-a362-48cd-8387-34c060df18a1","Type":"ContainerStarted","Data":"0de606a66e7906578887d2cfabdd8907ce32b62895f55933b56db113bf940b2c"} Jan 27 07:15:55 crc kubenswrapper[4872]: I0127 07:15:55.804443 4872 scope.go:117] "RemoveContainer" containerID="4a0bbd673e22866101d108b8afff2324f2f3a4a3901f3a1131db6e0b5e06bc9b" Jan 27 07:16:33 crc kubenswrapper[4872]: I0127 07:16:33.966210 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6l6mq"] Jan 27 07:16:33 crc kubenswrapper[4872]: E0127 07:16:33.967152 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" containerName="collect-profiles" Jan 27 07:16:33 crc kubenswrapper[4872]: I0127 07:16:33.967171 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" containerName="collect-profiles" Jan 27 07:16:33 crc kubenswrapper[4872]: I0127 07:16:33.967329 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0a1cc6-8ae6-46f4-bf39-ae37d5ad6cad" containerName="collect-profiles" Jan 27 07:16:33 crc kubenswrapper[4872]: I0127 07:16:33.968477 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:33 crc kubenswrapper[4872]: I0127 07:16:33.989521 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6l6mq"] Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.100949 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsm7c\" (UniqueName: \"kubernetes.io/projected/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-kube-api-access-vsm7c\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.101000 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-utilities\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.101066 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-catalog-content\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.202750 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsm7c\" (UniqueName: \"kubernetes.io/projected/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-kube-api-access-vsm7c\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.203003 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-utilities\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: 
I0127 07:16:34.203241 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-catalog-content\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.203576 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-utilities\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.203802 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-catalog-content\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.234086 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsm7c\" (UniqueName: \"kubernetes.io/projected/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-kube-api-access-vsm7c\") pod \"community-operators-6l6mq\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.288674 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:34 crc kubenswrapper[4872]: I0127 07:16:34.634862 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6l6mq"] Jan 27 07:16:35 crc kubenswrapper[4872]: I0127 07:16:35.070188 4872 generic.go:334] "Generic (PLEG): container finished" podID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerID="5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f" exitCode=0 Jan 27 07:16:35 crc kubenswrapper[4872]: I0127 07:16:35.070403 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerDied","Data":"5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f"} Jan 27 07:16:35 crc kubenswrapper[4872]: I0127 07:16:35.070427 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerStarted","Data":"94c0200d60377c309ca8ad833a31848d29bf2cdc51d3494d3b28dbfbc3c8ae78"} Jan 27 07:16:35 crc kubenswrapper[4872]: I0127 07:16:35.072325 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:16:36 crc kubenswrapper[4872]: I0127 07:16:36.077396 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerStarted","Data":"075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7"} Jan 27 07:16:37 crc kubenswrapper[4872]: I0127 07:16:37.086104 4872 generic.go:334] "Generic (PLEG): container finished" podID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerID="075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7" exitCode=0 Jan 27 07:16:37 crc 
kubenswrapper[4872]: I0127 07:16:37.086199 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerDied","Data":"075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7"} Jan 27 07:16:38 crc kubenswrapper[4872]: I0127 07:16:38.093557 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerStarted","Data":"8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15"} Jan 27 07:16:38 crc kubenswrapper[4872]: I0127 07:16:38.112144 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6l6mq" podStartSLOduration=2.568877148 podStartE2EDuration="5.112125398s" podCreationTimestamp="2026-01-27 07:16:33 +0000 UTC" firstStartedPulling="2026-01-27 07:16:35.072141397 +0000 UTC m=+1371.599616583" lastFinishedPulling="2026-01-27 07:16:37.615389637 +0000 UTC m=+1374.142864833" observedRunningTime="2026-01-27 07:16:38.10888568 +0000 UTC m=+1374.636360896" watchObservedRunningTime="2026-01-27 07:16:38.112125398 +0000 UTC m=+1374.639600594" Jan 27 07:16:44 crc kubenswrapper[4872]: I0127 07:16:44.289154 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:44 crc kubenswrapper[4872]: I0127 07:16:44.289731 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:44 crc kubenswrapper[4872]: I0127 07:16:44.339115 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:45 crc kubenswrapper[4872]: I0127 07:16:45.188434 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:45 crc kubenswrapper[4872]: I0127 07:16:45.246227 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6l6mq"] Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.148658 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6l6mq" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="registry-server" containerID="cri-o://8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15" gracePeriod=2 Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.602009 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.804338 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-utilities\") pod \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.804440 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-catalog-content\") pod \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.804513 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsm7c\" (UniqueName: \"kubernetes.io/projected/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-kube-api-access-vsm7c\") pod \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\" (UID: \"8972f878-d003-4d0f-bbf6-f1024ea4f2ed\") " Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.806063 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-utilities" (OuterVolumeSpecName: "utilities") pod "8972f878-d003-4d0f-bbf6-f1024ea4f2ed" (UID: "8972f878-d003-4d0f-bbf6-f1024ea4f2ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.816251 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-kube-api-access-vsm7c" (OuterVolumeSpecName: "kube-api-access-vsm7c") pod "8972f878-d003-4d0f-bbf6-f1024ea4f2ed" (UID: "8972f878-d003-4d0f-bbf6-f1024ea4f2ed"). InnerVolumeSpecName "kube-api-access-vsm7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.854965 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8972f878-d003-4d0f-bbf6-f1024ea4f2ed" (UID: "8972f878-d003-4d0f-bbf6-f1024ea4f2ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.905825 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsm7c\" (UniqueName: \"kubernetes.io/projected/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-kube-api-access-vsm7c\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.905934 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:47 crc kubenswrapper[4872]: I0127 07:16:47.905946 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8972f878-d003-4d0f-bbf6-f1024ea4f2ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.158805 4872 generic.go:334] "Generic (PLEG): container finished" podID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerID="8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15" exitCode=0 Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.158857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerDied","Data":"8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15"} Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.158885 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l6mq" event={"ID":"8972f878-d003-4d0f-bbf6-f1024ea4f2ed","Type":"ContainerDied","Data":"94c0200d60377c309ca8ad833a31848d29bf2cdc51d3494d3b28dbfbc3c8ae78"} Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.158903 4872 scope.go:117] "RemoveContainer" containerID="8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.159008 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6l6mq" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.186985 4872 scope.go:117] "RemoveContainer" containerID="075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.189971 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6l6mq"] Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.197410 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6l6mq"] Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.226396 4872 scope.go:117] "RemoveContainer" containerID="5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.241596 4872 scope.go:117] "RemoveContainer" containerID="8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15" Jan 27 07:16:48 crc kubenswrapper[4872]: E0127 07:16:48.242032 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15\": container with ID starting with 8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15 not found: ID does not exist" containerID="8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.242072 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15"} err="failed to get container status \"8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15\": rpc error: code = NotFound desc = could not find container \"8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15\": container with ID starting with 8dc06a7a6357a1295a14a4c3ec9123f7e427173ba827dc5e288f72cad4f8ca15 not found: ID does not exist" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.242098 4872 scope.go:117] "RemoveContainer" containerID="075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7" Jan 27 07:16:48 crc kubenswrapper[4872]: E0127 07:16:48.242486 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7\": container with ID starting with 075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7 not found: ID does not exist" containerID="075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.242524 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7"} err="failed to get container status \"075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7\": rpc error: code = NotFound desc = could not find container \"075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7\": container with ID starting with 075838a442bbf3cf27f550751ebf7e911aef607aaf57b667174ada0cf83192d7 not found: ID does not exist" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.242549 4872 scope.go:117] "RemoveContainer" containerID="5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f" Jan 27 07:16:48 crc kubenswrapper[4872]: E0127 07:16:48.242958 4872 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f\": container with ID starting with 5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f not found: ID does not exist" containerID="5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f" Jan 27 07:16:48 crc kubenswrapper[4872]: I0127 07:16:48.243012 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f"} err="failed to get container status \"5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f\": rpc error: code = NotFound desc = could not find container \"5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f\": container with ID starting with 5c083b4f3e0468ea1d1476b929b4e3620cc8098024ccbc5730988a43998bb55f not found: ID does not exist" Jan 27 07:16:50 crc kubenswrapper[4872]: I0127 07:16:50.107465 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" path="/var/lib/kubelet/pods/8972f878-d003-4d0f-bbf6-f1024ea4f2ed/volumes" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.723531 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xq5v"] Jan 27 07:17:15 crc kubenswrapper[4872]: E0127 07:17:15.724375 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="extract-utilities" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.724391 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="extract-utilities" Jan 27 07:17:15 crc kubenswrapper[4872]: E0127 07:17:15.724407 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="extract-content" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.724416 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="extract-content" Jan 27 07:17:15 crc kubenswrapper[4872]: E0127 07:17:15.724429 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="registry-server" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.724438 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="registry-server" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.724641 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="8972f878-d003-4d0f-bbf6-f1024ea4f2ed" containerName="registry-server" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.729930 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.790660 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xq5v"] Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.899075 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtzhk\" (UniqueName: \"kubernetes.io/projected/45200522-41fd-42ab-9901-917c150d2106-kube-api-access-vtzhk\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.899387 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-utilities\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:15 crc kubenswrapper[4872]: I0127 07:17:15.899480 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-catalog-content\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.000919 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtzhk\" (UniqueName: \"kubernetes.io/projected/45200522-41fd-42ab-9901-917c150d2106-kube-api-access-vtzhk\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.000982 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-utilities\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.001002 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-catalog-content\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.001600 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-catalog-content\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.001585 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-utilities\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.019689 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vtzhk\" (UniqueName: \"kubernetes.io/projected/45200522-41fd-42ab-9901-917c150d2106-kube-api-access-vtzhk\") pod \"redhat-marketplace-7xq5v\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.061770 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:16 crc kubenswrapper[4872]: I0127 07:17:16.551247 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xq5v"] Jan 27 07:17:17 crc kubenswrapper[4872]: I0127 07:17:17.374636 4872 generic.go:334] "Generic (PLEG): container finished" podID="45200522-41fd-42ab-9901-917c150d2106" containerID="6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4" exitCode=0 Jan 27 07:17:17 crc kubenswrapper[4872]: I0127 07:17:17.374760 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerDied","Data":"6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4"} Jan 27 07:17:17 crc kubenswrapper[4872]: I0127 07:17:17.374958 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerStarted","Data":"e9d57c725c9c55fe34ae48333c2127918b65d23a4d4733cc91a78c88035b66ae"} Jan 27 07:17:18 crc kubenswrapper[4872]: I0127 07:17:18.385426 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerStarted","Data":"1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627"} Jan 27 07:17:19 crc kubenswrapper[4872]: I0127 07:17:19.394481 4872 generic.go:334] "Generic (PLEG): container finished" podID="45200522-41fd-42ab-9901-917c150d2106" containerID="1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627" exitCode=0 Jan 27 07:17:19 crc kubenswrapper[4872]: I0127 07:17:19.394593 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerDied","Data":"1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627"} Jan 27 07:17:19 crc kubenswrapper[4872]: I0127 07:17:19.395075 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerStarted","Data":"80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9"} Jan 27 07:17:19 crc kubenswrapper[4872]: I0127 07:17:19.420293 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xq5v" podStartSLOduration=2.982776586 podStartE2EDuration="4.420274304s" podCreationTimestamp="2026-01-27 07:17:15 +0000 UTC" firstStartedPulling="2026-01-27 07:17:17.376097667 +0000 UTC m=+1413.903572863" lastFinishedPulling="2026-01-27 07:17:18.813595385 +0000 UTC m=+1415.341070581" observedRunningTime="2026-01-27 07:17:19.418277759 +0000 UTC m=+1415.945752955" watchObservedRunningTime="2026-01-27 07:17:19.420274304 +0000 UTC m=+1415.947749500" Jan 27 07:17:26 crc kubenswrapper[4872]: I0127 07:17:26.061872 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:26 crc kubenswrapper[4872]: I0127 07:17:26.063382 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:26 crc kubenswrapper[4872]: I0127 07:17:26.120352 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:26 crc kubenswrapper[4872]: I0127 07:17:26.490353 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:26 crc kubenswrapper[4872]: I0127 07:17:26.550011 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xq5v"] Jan 27 07:17:28 crc kubenswrapper[4872]: I0127 07:17:28.451818 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xq5v" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="registry-server" containerID="cri-o://80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9" gracePeriod=2 Jan 27 07:17:28 crc kubenswrapper[4872]: I0127 07:17:28.992695 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.073629 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-utilities\") pod \"45200522-41fd-42ab-9901-917c150d2106\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.073713 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-catalog-content\") pod \"45200522-41fd-42ab-9901-917c150d2106\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.073835 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtzhk\" (UniqueName: \"kubernetes.io/projected/45200522-41fd-42ab-9901-917c150d2106-kube-api-access-vtzhk\") pod \"45200522-41fd-42ab-9901-917c150d2106\" (UID: \"45200522-41fd-42ab-9901-917c150d2106\") " Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.074517 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-utilities" (OuterVolumeSpecName: "utilities") pod "45200522-41fd-42ab-9901-917c150d2106" (UID: "45200522-41fd-42ab-9901-917c150d2106"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.078826 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45200522-41fd-42ab-9901-917c150d2106-kube-api-access-vtzhk" (OuterVolumeSpecName: "kube-api-access-vtzhk") pod "45200522-41fd-42ab-9901-917c150d2106" (UID: "45200522-41fd-42ab-9901-917c150d2106"). InnerVolumeSpecName "kube-api-access-vtzhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.096707 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45200522-41fd-42ab-9901-917c150d2106" (UID: "45200522-41fd-42ab-9901-917c150d2106"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.175617 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.175650 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45200522-41fd-42ab-9901-917c150d2106-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.175659 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtzhk\" (UniqueName: \"kubernetes.io/projected/45200522-41fd-42ab-9901-917c150d2106-kube-api-access-vtzhk\") on node \"crc\" DevicePath \"\"" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.459719 4872 generic.go:334] "Generic (PLEG): container finished" podID="45200522-41fd-42ab-9901-917c150d2106" containerID="80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9" exitCode=0 Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.459763 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerDied","Data":"80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9"} Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.459821 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xq5v" event={"ID":"45200522-41fd-42ab-9901-917c150d2106","Type":"ContainerDied","Data":"e9d57c725c9c55fe34ae48333c2127918b65d23a4d4733cc91a78c88035b66ae"} Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.459864 4872 scope.go:117] "RemoveContainer" containerID="80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.460555 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xq5v" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.483402 4872 scope.go:117] "RemoveContainer" containerID="1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.501632 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xq5v"] Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.507887 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xq5v"] Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.531498 4872 scope.go:117] "RemoveContainer" containerID="6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.556701 4872 scope.go:117] "RemoveContainer" containerID="80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9" Jan 27 07:17:29 crc kubenswrapper[4872]: E0127 07:17:29.557259 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9\": container with ID starting with 80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9 not found: ID does not exist" containerID="80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.557303 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9"} err="failed to get container status \"80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9\": rpc error: code = NotFound desc = could not find container \"80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9\": container with ID starting with 80e508a4f6ee4dfee0f3bdcce1c10a106930e4f5e8cb7ad676468c5adcbd9ef9 not found: ID does not exist" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.557333 4872 scope.go:117] "RemoveContainer" containerID="1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627" Jan 27 07:17:29 crc kubenswrapper[4872]: E0127 07:17:29.557771 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627\": container with ID starting with 1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627 not found: ID does not exist" containerID="1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.557813 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627"} err="failed to get container status \"1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627\": rpc error: code = NotFound desc = could not find container \"1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627\": container with ID starting with 1b6b6ffcb621e920083cc1b4b33c0a679408173d0bdf2f07d00fb30325f18627 not found: ID does not exist" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.557844 4872 scope.go:117] "RemoveContainer" containerID="6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4" Jan 27 07:17:29 crc kubenswrapper[4872]: E0127 07:17:29.558167 4872 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4\": container with ID starting with 6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4 not found: ID does not exist" containerID="6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4" Jan 27 07:17:29 crc kubenswrapper[4872]: I0127 07:17:29.558273 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4"} err="failed to get container status \"6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4\": rpc error: code = NotFound desc = could not find container \"6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4\": container with ID starting with 6e0de6c5018bc1fc3ffc69e21e49029fc4c809d5a0e44df7a7467d0965184db4 not found: ID does not exist" Jan 27 07:17:30 crc kubenswrapper[4872]: I0127 07:17:30.112759 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45200522-41fd-42ab-9901-917c150d2106" path="/var/lib/kubelet/pods/45200522-41fd-42ab-9901-917c150d2106/volumes" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.353127 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rnvg5"] Jan 27 07:17:40 crc kubenswrapper[4872]: E0127 07:17:40.354268 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="extract-content" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.354284 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="extract-content" Jan 27 07:17:40 crc kubenswrapper[4872]: E0127 07:17:40.354307 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="registry-server" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.354316 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="registry-server" Jan 27 07:17:40 crc kubenswrapper[4872]: E0127 07:17:40.354339 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="extract-utilities" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.354349 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="extract-utilities" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.354657 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="45200522-41fd-42ab-9901-917c150d2106" containerName="registry-server" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.356695 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.360678 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnvg5"] Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.449968 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-catalog-content\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.450009 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-utilities\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.450149 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6rc2\" (UniqueName: \"kubernetes.io/projected/a7e9d487-e77c-4aa2-a081-8a8057280ee5-kube-api-access-g6rc2\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.551996 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-catalog-content\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.552048 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-utilities\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.552088 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6rc2\" (UniqueName: \"kubernetes.io/projected/a7e9d487-e77c-4aa2-a081-8a8057280ee5-kube-api-access-g6rc2\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.552507 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-catalog-content\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.552571 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-utilities\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.572795 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g6rc2\" (UniqueName: \"kubernetes.io/projected/a7e9d487-e77c-4aa2-a081-8a8057280ee5-kube-api-access-g6rc2\") pod \"redhat-operators-rnvg5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:40 crc kubenswrapper[4872]: I0127 07:17:40.702241 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:41 crc kubenswrapper[4872]: I0127 07:17:41.094798 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnvg5"] Jan 27 07:17:41 crc kubenswrapper[4872]: I0127 07:17:41.550725 4872 generic.go:334] "Generic (PLEG): container finished" podID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerID="52255533101ec432f01ff03ae7fab3c01ababe2510c6137aca7809f71fa3b727" exitCode=0 Jan 27 07:17:41 crc kubenswrapper[4872]: I0127 07:17:41.550785 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerDied","Data":"52255533101ec432f01ff03ae7fab3c01ababe2510c6137aca7809f71fa3b727"} Jan 27 07:17:41 crc kubenswrapper[4872]: I0127 07:17:41.551011 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerStarted","Data":"472aee41caef343cf28c4c1914efc2d5cd28c3c34bc1c7c92b6e91a982a7789d"} Jan 27 07:17:42 crc kubenswrapper[4872]: I0127 07:17:42.562053 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerStarted","Data":"13993eb589e617e84302c3381f2585d9817ad04588daedebe11fc3ad50005c5a"} Jan 27 07:17:43 crc kubenswrapper[4872]: I0127 07:17:43.570679 4872 generic.go:334] "Generic (PLEG): container finished" podID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerID="13993eb589e617e84302c3381f2585d9817ad04588daedebe11fc3ad50005c5a" exitCode=0 Jan 27 07:17:43 crc kubenswrapper[4872]: I0127 07:17:43.570733 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerDied","Data":"13993eb589e617e84302c3381f2585d9817ad04588daedebe11fc3ad50005c5a"} Jan 27 07:17:44 crc kubenswrapper[4872]: I0127 07:17:44.581676 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerStarted","Data":"406c23203ee2950ba8a11a72f6573501d2fd5b2a46d4b40dc72fe9571009465e"} Jan 27 07:17:44 crc kubenswrapper[4872]: I0127 07:17:44.622377 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rnvg5" podStartSLOduration=2.147679081 podStartE2EDuration="4.622353918s" podCreationTimestamp="2026-01-27 07:17:40 +0000 UTC" firstStartedPulling="2026-01-27 07:17:41.552573217 +0000 UTC m=+1438.080048413" lastFinishedPulling="2026-01-27 07:17:44.027248024 +0000 UTC m=+1440.554723250" observedRunningTime="2026-01-27 07:17:44.610950208 +0000 UTC m=+1441.138425414" watchObservedRunningTime="2026-01-27 07:17:44.622353918 +0000 UTC m=+1441.149829114" Jan 27 07:17:50 crc kubenswrapper[4872]: I0127 07:17:50.702958 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 
07:17:50 crc kubenswrapper[4872]: I0127 07:17:50.703911 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:50 crc kubenswrapper[4872]: I0127 07:17:50.765081 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:51 crc kubenswrapper[4872]: I0127 07:17:51.694633 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:51 crc kubenswrapper[4872]: I0127 07:17:51.754983 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rnvg5"] Jan 27 07:17:53 crc kubenswrapper[4872]: I0127 07:17:53.643405 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rnvg5" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="registry-server" containerID="cri-o://406c23203ee2950ba8a11a72f6573501d2fd5b2a46d4b40dc72fe9571009465e" gracePeriod=2 Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.653644 4872 generic.go:334] "Generic (PLEG): container finished" podID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerID="406c23203ee2950ba8a11a72f6573501d2fd5b2a46d4b40dc72fe9571009465e" exitCode=0 Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.653732 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerDied","Data":"406c23203ee2950ba8a11a72f6573501d2fd5b2a46d4b40dc72fe9571009465e"} Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.858739 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.991675 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-utilities\") pod \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.991778 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-catalog-content\") pod \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.991864 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6rc2\" (UniqueName: \"kubernetes.io/projected/a7e9d487-e77c-4aa2-a081-8a8057280ee5-kube-api-access-g6rc2\") pod \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\" (UID: \"a7e9d487-e77c-4aa2-a081-8a8057280ee5\") " Jan 27 07:17:54 crc kubenswrapper[4872]: I0127 07:17:54.993142 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-utilities" (OuterVolumeSpecName: "utilities") pod "a7e9d487-e77c-4aa2-a081-8a8057280ee5" (UID: "a7e9d487-e77c-4aa2-a081-8a8057280ee5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.003618 4872 patch_prober.go:28] interesting pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.003676 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.010023 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e9d487-e77c-4aa2-a081-8a8057280ee5-kube-api-access-g6rc2" (OuterVolumeSpecName: "kube-api-access-g6rc2") pod "a7e9d487-e77c-4aa2-a081-8a8057280ee5" (UID: "a7e9d487-e77c-4aa2-a081-8a8057280ee5"). InnerVolumeSpecName "kube-api-access-g6rc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.093192 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.093229 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6rc2\" (UniqueName: \"kubernetes.io/projected/a7e9d487-e77c-4aa2-a081-8a8057280ee5-kube-api-access-g6rc2\") on node \"crc\" DevicePath \"\"" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.100466 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7e9d487-e77c-4aa2-a081-8a8057280ee5" (UID: "a7e9d487-e77c-4aa2-a081-8a8057280ee5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.195212 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e9d487-e77c-4aa2-a081-8a8057280ee5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.661352 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnvg5" event={"ID":"a7e9d487-e77c-4aa2-a081-8a8057280ee5","Type":"ContainerDied","Data":"472aee41caef343cf28c4c1914efc2d5cd28c3c34bc1c7c92b6e91a982a7789d"} Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.661411 4872 scope.go:117] "RemoveContainer" containerID="406c23203ee2950ba8a11a72f6573501d2fd5b2a46d4b40dc72fe9571009465e" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.661461 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rnvg5" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.679641 4872 scope.go:117] "RemoveContainer" containerID="13993eb589e617e84302c3381f2585d9817ad04588daedebe11fc3ad50005c5a" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.699879 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rnvg5"] Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.707761 4872 scope.go:117] "RemoveContainer" containerID="52255533101ec432f01ff03ae7fab3c01ababe2510c6137aca7809f71fa3b727" Jan 27 07:17:55 crc kubenswrapper[4872]: I0127 07:17:55.713217 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rnvg5"] Jan 27 07:17:56 crc kubenswrapper[4872]: I0127 07:17:56.106623 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" path="/var/lib/kubelet/pods/a7e9d487-e77c-4aa2-a081-8a8057280ee5/volumes" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.230983 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-695c4"] Jan 27 07:18:15 crc kubenswrapper[4872]: E0127 07:18:15.234133 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="extract-content" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.234237 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="extract-content" Jan 27 07:18:15 crc kubenswrapper[4872]: E0127 07:18:15.234315 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="registry-server" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.234404 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="registry-server" Jan 27 07:18:15 crc kubenswrapper[4872]: E0127 07:18:15.234474 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="extract-utilities" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.234528 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="extract-utilities" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.234707 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e9d487-e77c-4aa2-a081-8a8057280ee5" containerName="registry-server" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.236133 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.266017 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-695c4"] Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.369150 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgtf\" (UniqueName: \"kubernetes.io/projected/c66c6039-6da2-4e24-829e-9d4df43e51df-kube-api-access-bbgtf\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.369266 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-catalog-content\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.369347 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-utilities\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.470445 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgtf\" (UniqueName: \"kubernetes.io/projected/c66c6039-6da2-4e24-829e-9d4df43e51df-kube-api-access-bbgtf\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.470744 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-catalog-content\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.470861 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-utilities\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.471290 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-utilities\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.471291 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-catalog-content\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.489372 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bbgtf\" (UniqueName: \"kubernetes.io/projected/c66c6039-6da2-4e24-829e-9d4df43e51df-kube-api-access-bbgtf\") pod \"certified-operators-695c4\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:15 crc kubenswrapper[4872]: I0127 07:18:15.565219 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:16 crc kubenswrapper[4872]: I0127 07:18:16.031732 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-695c4"] Jan 27 07:18:16 crc kubenswrapper[4872]: I0127 07:18:16.802950 4872 generic.go:334] "Generic (PLEG): container finished" podID="c66c6039-6da2-4e24-829e-9d4df43e51df" containerID="fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f" exitCode=0 Jan 27 07:18:16 crc kubenswrapper[4872]: I0127 07:18:16.803032 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerDied","Data":"fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f"} Jan 27 07:18:16 crc kubenswrapper[4872]: I0127 07:18:16.803464 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerStarted","Data":"45693d08b676dd97d7dfebf163431e20f617ccfd32d5de0db84f15221e4fc769"} Jan 27 07:18:17 crc kubenswrapper[4872]: I0127 07:18:17.811910 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerStarted","Data":"c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737"} Jan 27 07:18:18 crc kubenswrapper[4872]: I0127 07:18:18.823583 4872 generic.go:334] "Generic (PLEG): container finished" podID="c66c6039-6da2-4e24-829e-9d4df43e51df" containerID="c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737" exitCode=0 Jan 27 07:18:18 crc kubenswrapper[4872]: I0127 07:18:18.823625 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerDied","Data":"c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737"} Jan 27 07:18:19 crc kubenswrapper[4872]: I0127 07:18:19.838158 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerStarted","Data":"4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8"} Jan 27 07:18:19 crc kubenswrapper[4872]: I0127 07:18:19.861220 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-695c4" podStartSLOduration=2.221978193 podStartE2EDuration="4.861200842s" podCreationTimestamp="2026-01-27 07:18:15 +0000 UTC" firstStartedPulling="2026-01-27 07:18:16.805206977 +0000 UTC m=+1473.332682173" lastFinishedPulling="2026-01-27 07:18:19.444429626 +0000 UTC m=+1475.971904822" observedRunningTime="2026-01-27 07:18:19.858010356 +0000 UTC m=+1476.385485552" watchObservedRunningTime="2026-01-27 07:18:19.861200842 +0000 UTC m=+1476.388676048" Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.002116 4872 patch_prober.go:28] interesting 
pod/machine-config-daemon-nkvlp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.002494 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkvlp" podUID="5ea42312-a362-48cd-8387-34c060df18a1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.566331 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.566401 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.637696 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.950526 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:25 crc kubenswrapper[4872]: I0127 07:18:25.999104 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-695c4"] Jan 27 07:18:27 crc kubenswrapper[4872]: I0127 07:18:27.896035 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-695c4" podUID="c66c6039-6da2-4e24-829e-9d4df43e51df" containerName="registry-server" containerID="cri-o://4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8" gracePeriod=2 Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.258589 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.359195 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-catalog-content\") pod \"c66c6039-6da2-4e24-829e-9d4df43e51df\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.359367 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbgtf\" (UniqueName: \"kubernetes.io/projected/c66c6039-6da2-4e24-829e-9d4df43e51df-kube-api-access-bbgtf\") pod \"c66c6039-6da2-4e24-829e-9d4df43e51df\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.359462 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-utilities\") pod \"c66c6039-6da2-4e24-829e-9d4df43e51df\" (UID: \"c66c6039-6da2-4e24-829e-9d4df43e51df\") " Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.360348 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-utilities" (OuterVolumeSpecName: "utilities") pod "c66c6039-6da2-4e24-829e-9d4df43e51df" (UID: "c66c6039-6da2-4e24-829e-9d4df43e51df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.366407 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66c6039-6da2-4e24-829e-9d4df43e51df-kube-api-access-bbgtf" (OuterVolumeSpecName: "kube-api-access-bbgtf") pod "c66c6039-6da2-4e24-829e-9d4df43e51df" (UID: "c66c6039-6da2-4e24-829e-9d4df43e51df"). InnerVolumeSpecName "kube-api-access-bbgtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.406202 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c66c6039-6da2-4e24-829e-9d4df43e51df" (UID: "c66c6039-6da2-4e24-829e-9d4df43e51df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.460776 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.460808 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c66c6039-6da2-4e24-829e-9d4df43e51df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.460821 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbgtf\" (UniqueName: \"kubernetes.io/projected/c66c6039-6da2-4e24-829e-9d4df43e51df-kube-api-access-bbgtf\") on node \"crc\" DevicePath \"\"" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.905277 4872 generic.go:334] "Generic (PLEG): container finished" podID="c66c6039-6da2-4e24-829e-9d4df43e51df" containerID="4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8" exitCode=0 Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.905316 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerDied","Data":"4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8"} Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.905341 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-695c4" event={"ID":"c66c6039-6da2-4e24-829e-9d4df43e51df","Type":"ContainerDied","Data":"45693d08b676dd97d7dfebf163431e20f617ccfd32d5de0db84f15221e4fc769"} Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.905349 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-695c4" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.905359 4872 scope.go:117] "RemoveContainer" containerID="4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.929232 4872 scope.go:117] "RemoveContainer" containerID="c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.947997 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-695c4"] Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.960252 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-695c4"] Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.964929 4872 scope.go:117] "RemoveContainer" containerID="fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.980555 4872 scope.go:117] "RemoveContainer" containerID="4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8" Jan 27 07:18:28 crc kubenswrapper[4872]: E0127 07:18:28.981271 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8\": container with ID starting with 4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8 not found: ID does not exist" containerID="4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.981303 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8"} err="failed to get container status \"4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8\": rpc error: code = NotFound desc = could not find container \"4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8\": container with ID starting with 4df211dbe9aacc61369b3afdc58a0977a38c309c4a29dc7ea4415ecfe87bf0e8 not found: ID does not exist" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.981339 4872 scope.go:117] "RemoveContainer" containerID="c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737" Jan 27 07:18:28 crc kubenswrapper[4872]: E0127 07:18:28.982113 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737\": container with ID starting with c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737 not found: ID does not exist" containerID="c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.982144 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737"} err="failed to get container status \"c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737\": rpc error: code = NotFound desc = could not find container \"c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737\": container with ID starting with c177b91f3b0ba4163e31003124c64097c573eda0bd1e398cbd5f563c0baab737 not found: ID does not exist" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.982171 4872 scope.go:117] "RemoveContainer" 
containerID="fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f" Jan 27 07:18:28 crc kubenswrapper[4872]: E0127 07:18:28.982415 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f\": container with ID starting with fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f not found: ID does not exist" containerID="fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f" Jan 27 07:18:28 crc kubenswrapper[4872]: I0127 07:18:28.982444 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f"} err="failed to get container status \"fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f\": rpc error: code = NotFound desc = could not find container \"fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f\": container with ID starting with fee1015dbc946bb14a392b302f39d36aa413a7da609a98dc1b4fc217fdc67a2f not found: ID does not exist" Jan 27 07:18:30 crc kubenswrapper[4872]: I0127 07:18:30.110367 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66c6039-6da2-4e24-829e-9d4df43e51df" path="/var/lib/kubelet/pods/c66c6039-6da2-4e24-829e-9d4df43e51df/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136063345024453 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136063346017371 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136057751016517 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136057751015467 5ustar corecore